A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
posts = DB[:posts]
davids_posts = posts.filter(:author => 'david')
old_posts = posts.filter('stamp < ?', Date.today - 7)
davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
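For instance, a dataset can be transformed or reduced directly with Enumerable methods (a minimal sketch; the posts table and its title and word_count columns are assumptions):

titles = DB[:posts].map{|post| post[:title]}
total_words = DB[:posts].inject(0){|sum, post| sum + post[:word_count].to_i}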
For more information, see the "Dataset Basics" guide.
PREPARED_ARG_PLACEHOLDER | = | ':'.freeze |
PREPARED_ARG_PLACEHOLDER | = | LiteralString.new('$').freeze |
columns | -> | columns_without_introspection |
convert_types | [RW] | Whether to convert some Java types to ruby types when retrieving rows. Uses the database's setting by default; can be set to false to roughly double performance when fetching rows. |
Enable column introspection for every dataset.
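A hedged usage sketch (assumes this Sequel version can load the extension via Sequel.extension; the posts table is illustrative):

Sequel.extension :columns_introspection
Sequel::Dataset.introspect_all_columns
DB[:posts].select(:id, :title).columns # guessed from the SELECT list without a database query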
# File lib/sequel/extensions/columns_introspection.rb, line 56
def self.introspect_all_columns
  include ColumnsIntrospection
  remove_method(:columns) if instance_methods(false).map{|x| x.to_s}.include?('columns')
end
MySQL is different in that it supports prepared statements but not bound variables outside of prepared statements. The default implementation breaks the use of subselects in prepared statements, so extend the temporary prepared statement that this creates with a module that fixes it.
# File lib/sequel/adapters/mysql.rb, line 351
def call(type, bind_arguments={}, *values, &block)
  ps = to_prepared_statement(type, values)
  ps.extend(CallableStatementMethods)
  ps.call(bind_arguments, &block)
end
Delete rows matching this dataset
# File lib/sequel/adapters/mysql.rb, line 358 358: def delete 359: execute_dui(delete_sql){|c| return c.affected_rows} 360: end
Yields a paginated dataset for each page and returns the receiver. Does a count to find the total number of records for this dataset.
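A usage sketch (assumes the pagination extension is loaded; the posts table is illustrative):

DB[:posts].order(:id).each_page(25) do |page|
  # page is a paginated dataset extended with the Pagination module
  page.each{|row| p row}
end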
# File lib/sequel/extensions/pagination.rb, line 20
def each_page(page_size)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  record_count = count
  total_pages = (record_count / page_size.to_f).ceil
  (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)}
  self
end
Execute the SQL on the database and yield the rows as hashes with symbol keys.
# File lib/sequel/adapters/do.rb, line 175
def fetch_rows(sql)
  execute(sql) do |reader|
    cols = @columns = reader.fields.map{|f| output_identifier(f)}
    while(reader.next!) do
      h = {}
      cols.zip(reader.values).each{|k, v| h[k] = v}
      yield h
    end
  end
  self
end
Yield all rows matching this dataset. If the dataset is set to split multiple statements, yield one array of hashes per statement instead of yielding the results for all statements as hashes.
# File lib/sequel/adapters/mysql.rb, line 365
def fetch_rows(sql, &block)
  execute(sql) do |r|
    i = -1
    cols = r.fetch_fields.map do |f|
      # Pretend tinyint is another integer type if its length is not 1, to
      # avoid casting to boolean if Sequel::MySQL.convert_tinyint_to_bool
      # is set.
      type_proc = f.type == 1 && f.length != 1 ? MYSQL_TYPES[2] : MYSQL_TYPES[f.type]
      [output_identifier(f.name), type_proc, i+=1]
    end
    @columns = cols.map{|c| c.first}
    if opts[:split_multiple_result_sets]
      s = []
      yield_rows(r, cols){|h| s << h}
      yield s
    else
      yield_rows(r, cols, &block)
    end
  end
  self
end
Yield all rows returned by executing the given SQL and converting the types.
# File lib/sequel/adapters/postgres.rb, line 340 340: def fetch_rows(sql, &block) 341: return cursor_fetch_rows(sql, &block) if @opts[:cursor] 342: execute(sql){|res| yield_hash_rows(res, fetch_rows_set_cols(res), &block)} 343: end
Yield a hash for each row in the dataset.
# File lib/sequel/adapters/sqlite.rb, line 292
def fetch_rows(sql)
  execute(sql) do |result|
    i = -1
    type_procs = result.types.map{|t| SQLITE_TYPES[base_type_name(t)]}
    cols = result.columns.map{|c| i+=1; [output_identifier(c), i, type_procs[i]]}
    @columns = cols.map{|c| c.first}
    result.each do |values|
      row = {}
      cols.each do |name,i,type_proc|
        v = values[i]
        if type_proc && v.is_a?(String)
          v = type_proc.call(v)
        end
        row[name] = v
      end
      yield row
    end
  end
end
Don't allow graphing a dataset that splits multiple result sets.
# File lib/sequel/adapters/mysql.rb, line 388 388: def graph(*) 389: raise(Error, "Can't graph a dataset that splits multiple result sets") if opts[:split_multiple_result_sets] 390: super 391: end
Returns a paginated dataset. The returned dataset is limited to the page size at the correct offset, and extended with the Pagination module. If a record count is not provided, does a count of total number of records for this dataset.
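For example (a sketch; the posts table is illustrative):

ds = DB[:posts].order(:id).paginate(2, 25)
# SELECT * FROM posts ORDER BY id LIMIT 25 OFFSET 25
ds.current_page # => 2
ds.page_count   # total number of pages, based on the record count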
# File lib/sequel/extensions/pagination.rb, line 11
def paginate(page_no, page_size, record_count=nil)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  paginated = limit(page_size, (page_no - 1) * page_size)
  paginated.extend(Pagination)
  paginated.set_pagination_info(page_no, page_size, record_count || count)
end
Prepare the given type of query with the given name and store it in the database. Note that a new native prepared statement is created on each call to this prepared statement.
# File lib/sequel/adapters/sqlite.rb, line 315
def prepare(type, name=nil, *values)
  ps = to_prepared_statement(type, values)
  ps.extend(PreparedStatementMethods)
  if name
    ps.prepared_statement_name = name
    db.prepared_statements[name] = ps
  end
  ps.prepared_sql
  ps
end
Prepare the given type of statement with the given name, and store it in the database to be called later.
# File lib/sequel/adapters/postgres.rb, line 461
def prepare(type, name=nil, *values)
  ps = to_prepared_statement(type, values)
  ps.extend(PreparedStatementMethods)
  if name
    ps.prepared_statement_name = name
    db.prepared_statements[name] = ps
  end
  ps
end
Store the given type of prepared statement in the associated database with the given name.
# File lib/sequel/adapters/mysql.rb, line 400
def prepare(type, name=nil, *values)
  ps = to_prepared_statement(type, values)
  ps.extend(PreparedStatementMethods)
  if name
    ps.prepared_statement_name = name
    db.prepared_statements[name] = ps
  end
  ps
end
Create a named prepared statement that is stored in the database (and connection) for reuse.
# File lib/sequel/adapters/jdbc.rb, line 567
def prepare(type, name=nil, *values)
  ps = to_prepared_statement(type, values)
  ps.extend(PreparedStatementMethods)
  if name
    ps.prepared_statement_name = name
    db.prepared_statements[name] = ps
  end
  ps
end
Translates a query block into a dataset. Query blocks can be useful when expressing complex SELECT statements, e.g.:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{|o| (o.x > 1) & (o.y > 2)}
  order :z.desc
end
Which is the same as:
dataset = DB[:items].select(:x, :y, :z).filter{|o| (o.x > 1) & (o.y > 2)}.order(:z.desc)
Note that inside a call to query, you cannot call each, insert, update, or delete (or any method that calls those), or Sequel will raise an error.
# File lib/sequel/extensions/query.rb, line 30
def query(&block)
  copy = clone({})
  copy.extend(QueryBlockCopy)
  copy.instance_eval(&block)
  clone(copy.opts)
end
Makes each yield arrays of rows, with each array containing the rows for a given result set. Does not work with graphing. This lets you submit SQL with multiple statements and easily determine which statement returned which results.
Modifies the row_proc of the returned dataset so that it still works as expected (running on the hashes instead of on the arrays of hashes). If you modify the row_proc afterward, note that it will receive an array of hashes instead of a hash.
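A hedged usage sketch (assumes the MySQL adapter with a connection configured to allow multiple statements per query; the raw-SQL dataset form DB['...'] is used for illustration):

ds = DB['SELECT 1 AS a; SELECT 2 AS b'].split_multiple_result_sets
ds.each do |rows|
  p rows # one array of hashes per statement
end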
# File lib/sequel/adapters/mysql.rb, line 424
def split_multiple_result_sets
  raise(Error, "Can't split multiple statements on a graphed dataset") if opts[:graph]
  ds = clone(:split_multiple_result_sets=>true)
  ds.row_proc = proc{|x| x.map{|h| row_proc.call(h)}} if row_proc
  ds
end
Update the matching rows.
# File lib/sequel/adapters/mysql.rb, line 432 432: def update(values={}) 433: execute_dui(update_sql(values)){|c| return c.affected_rows} 434: end
Uses a cursor for fetching records, instead of fetching the entire result set at once. Can be used to process large datasets without holding all rows in memory (which is what the underlying drivers do by default). Options:
:rows_per_fetch : | The number of rows to fetch at a time (defaults to 1000). |
Usage:
DB[:huge_table].use_cursor.each{|row| p row}
DB[:huge_table].use_cursor(:rows_per_fetch=>10000).each{|row| p row}
This is untested with the prepared statement/bound variable support, and unlikely to work with either.
# File lib/sequel/adapters/postgres.rb, line 360 360: def use_cursor(opts={}) 361: clone(:cursor=>{:rows_per_fetch=>1000}.merge(opts)) 362: end
Returns a DELETE SQL query string. See delete.
dataset.filter{|o| o.price >= 100}.delete_sql # => "DELETE FROM items WHERE (price >= 100)"
# File lib/sequel/dataset/sql.rb, line 12 12: def delete_sql 13: return static_sql(opts[:sql]) if opts[:sql] 14: check_modification_allowed! 15: clause_sql(:delete) 16: end
Returns an EXISTS clause for the dataset as a LiteralString.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File lib/sequel/dataset/sql.rb, line 22 22: def exists 23: LiteralString.new("EXISTS (#{select_sql})") 24: end
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(:a=>1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb, line 30
def insert_sql(*values)
  return static_sql(@opts[:sql]) if @opts[:sql]

  check_modification_allowed!

  columns = []

  case values.size
  when 0
    return insert_sql({})
  when 1
    case vals = values.at(0)
    when Hash
      vals = @opts[:defaults].merge(vals) if @opts[:defaults]
      vals = vals.merge(@opts[:overrides]) if @opts[:overrides]
      values = []
      vals.each do |k,v|
        columns << k
        values << v
      end
    when Dataset, Array, LiteralString
      values = vals
    else
      if vals.respond_to?(:values) && (v = vals.values).is_a?(Hash)
        return insert_sql(v)
      end
    end
  when 2
    if (v0 = values.at(0)).is_a?(Array) && ((v1 = values.at(1)).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString))
      columns, values = v0, v1
      raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length
    end
  end

  columns = columns.map{|k| literal(String === k ? k.to_sym : k)}
  clone(:columns=>columns, :values=>values)._insert_sql
end
Returns a literal representation of a value to be used as part of an SQL expression.
DB[:items].literal("abc'def\\") # => "'abc''def\\\\'"
DB[:items].literal(:items__id) # => "items.id"
DB[:items].literal([1, 2, 3]) # => "(1, 2, 3)"
DB[:items].literal(DB[:items]) # => "(SELECT * FROM items)"
DB[:items].literal(:x + 1 > :y) # => "((x + 1) > y)"
If an unsupported object is given, an Error is raised.
# File lib/sequel/dataset/sql.rb, line 78
def literal(v)
  case v
  when String
    return v if v.is_a?(LiteralString)
    v.is_a?(SQL::Blob) ? literal_blob(v) : literal_string(v)
  when Symbol
    literal_symbol(v)
  when Integer
    literal_integer(v)
  when Hash
    literal_hash(v)
  when SQL::Expression
    literal_expression(v)
  when Float
    literal_float(v)
  when BigDecimal
    literal_big_decimal(v)
  when NilClass
    literal_nil
  when TrueClass
    literal_true
  when FalseClass
    literal_false
  when Array
    literal_array(v)
  when Time
    literal_time(v)
  when DateTime
    literal_datetime(v)
  when Date
    literal_date(v)
  when Dataset
    literal_dataset(v)
  else
    literal_other(v)
  end
end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
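For example, with the default single-row implementation (a sketch; the items table is illustrative):

DB[:items].multi_insert_sql([:x, :y], [[1, 2], [3, 4]])
# => ["INSERT INTO items (x, y) VALUES (1, 2)",
#     "INSERT INTO items (x, y) VALUES (3, 4)"]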
# File lib/sequel/dataset/sql.rb, line 122 122: def multi_insert_sql(columns, values) 123: values.map{|r| insert_sql(columns, r)} 124: end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 135 135: def sql 136: select_sql 137: end
Returns a TRUNCATE SQL query string. See truncate
DB[:items].truncate_sql # => 'TRUNCATE items'
# File lib/sequel/dataset/sql.rb, line 142
def truncate_sql
  if opts[:sql]
    static_sql(opts[:sql])
  else
    check_modification_allowed!
    raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where]
    _truncate_sql(source_list(opts[:from]))
  end
end
Formats an UPDATE statement using the given values. See update.
DB[:items].update_sql(:price => 100, :category => 'software')
# => "UPDATE items SET price = 100, category = 'software'"
Raises an Error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 159 159: def update_sql(values = {}) 160: return static_sql(opts[:sql]) if opts[:sql] 161: check_modification_allowed! 162: clone(:values=>values)._update_sql 163: end
These methods, while public, are not designed to be used directly by the end user.
AND_SEPARATOR | = | " AND ".freeze |
BOOL_FALSE | = | "'f'".freeze |
BOOL_TRUE | = | "'t'".freeze |
COMMA_SEPARATOR | = | ', '.freeze |
COLUMN_REF_RE1 | = | /\A(((?!__).)+)__(((?!___).)+)___(.+)\z/.freeze |
COLUMN_REF_RE2 | = | /\A(((?!___).)+)___(.+)\z/.freeze |
COLUMN_REF_RE3 | = | /\A(((?!__).)+)__(.+)\z/.freeze |
COUNT_FROM_SELF_OPTS | = | [:distinct, :group, :sql, :limit, :compounds] |
COUNT_OF_ALL_AS_COUNT | = | SQL::Function.new(:count, LiteralString.new('*'.freeze)).as(:count) |
DATASET_ALIAS_BASE_NAME | = | 't'.freeze |
FOR_UPDATE | = | ' FOR UPDATE'.freeze |
IS_LITERALS | = | {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze |
IS_OPERATORS | = | ::Sequel::SQL::ComplexExpression::IS_OPERATORS |
N_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS |
NULL | = | "NULL".freeze |
QUALIFY_KEYS | = | [:select, :where, :having, :order, :group] |
QUESTION_MARK | = | '?'.freeze |
DELETE_CLAUSE_METHODS | = | clause_methods(:delete, %w'from where') |
INSERT_CLAUSE_METHODS | = | clause_methods(:insert, %w'into columns values') |
SELECT_CLAUSE_METHODS | = | clause_methods(:select, %w'with distinct columns from join where group having compounds order limit lock') |
UPDATE_CLAUSE_METHODS | = | clause_methods(:update, %w'table set where') |
TIMESTAMP_FORMAT | = | "'%Y-%m-%d %H:%M:%S%N%z'".freeze |
STANDARD_TIMESTAMP_FORMAT | = | "TIMESTAMP #{TIMESTAMP_FORMAT}".freeze |
TWO_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS |
WILDCARD | = | LiteralString.new('*').freeze |
SQL_WITH | = | "WITH ".freeze |
SQL fragment for CaseExpression
# File lib/sequel/dataset/sql.rb, line 219
def case_expression_sql(ce)
  sql = '(CASE '
  sql << "#{literal(ce.expression)} " if ce.expression?
  ce.conditions.collect{ |c,r|
    sql << "WHEN #{literal(c)} THEN #{literal(r)} "
  }
  sql << "ELSE #{literal(ce.default)} END)"
end
SQL fragment for complex expressions
# File lib/sequel/dataset/sql.rb, line 239
def complex_expression_sql(op, args)
  case op
  when *IS_OPERATORS
    r = args.at(1)
    if r.nil? || supports_is_true?
      raise(InvalidOperation, 'Invalid argument used for IS operator') unless v = IS_LITERALS[r]
      "(#{literal(args.at(0))} #{op} #{v})"
    elsif op == :IS
      complex_expression_sql(:"=", args)
    else
      complex_expression_sql(:OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)])
    end
  when :IN, :"NOT IN"
    cols = args.at(0)
    vals = args.at(1)
    col_array = true if cols.is_a?(Array)
    if vals.is_a?(Array)
      val_array = true
      empty_val_array = vals == []
    end
    if col_array
      if empty_val_array
        if op == :IN
          literal(SQL::BooleanExpression.from_value_pairs(cols.to_a.map{|x| [x, x]}, :AND, true))
        else
          literal(1=>1)
        end
      elsif !supports_multiple_column_in?
        if val_array
          expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})})
          literal(op == :IN ? expr : ~expr)
        else
          old_vals = vals
          vals = vals.naked if vals.is_a?(Sequel::Dataset)
          vals = vals.to_a
          val_cols = old_vals.columns
          complex_expression_sql(op, [cols, vals.map!{|x| x.values_at(*val_cols)}])
        end
      else
        # If the columns and values are both arrays, use array_sql instead of
        # literal so that if values is an array of two element arrays, it
        # will be treated as a value list instead of a condition specifier.
        "(#{literal(cols)} #{op} #{val_array ? array_sql(vals) : literal(vals)})"
      end
    else
      if empty_val_array
        if op == :IN
          literal(SQL::BooleanExpression.from_value_pairs([[cols, cols]], :AND, true))
        else
          literal(1=>1)
        end
      else
        "(#{literal(cols)} #{op} #{literal(vals)})"
      end
    end
  when *TWO_ARITY_OPERATORS
    "(#{literal(args.at(0))} #{op} #{literal(args.at(1))})"
  when *N_ARITY_OPERATORS
    "(#{args.collect{|a| literal(a)}.join(" #{op} ")})"
  when :NOT
    "NOT #{literal(args.at(0))}"
  when :NOOP
    literal(args.at(0))
  when :"B~"
    "~#{literal(args.at(0))}"
  else
    raise(InvalidOperation, "invalid operator #{op}")
  end
end
SQL fragment specifying a JOIN clause without ON or USING.
# File lib/sequel/dataset/sql.rb, line 321
def join_clause_sql(jc)
  table = jc.table
  table_alias = jc.table_alias
  table_alias = nil if table == table_alias
  tref = table_ref(table)
  " #{join_type_sql(jc.join_type)} #{table_alias ? as_sql(tref, table_alias) : tref}"
end
SQL fragment for the ordered expression, used in the ORDER BY clause.
# File lib/sequel/dataset/sql.rb, line 346
def ordered_expression_sql(oe)
  s = "#{literal(oe.expression)} #{oe.descending ? 'DESC' : 'ASC'}"
  case oe.nulls
  when :first
    "#{s} NULLS FIRST"
  when :last
    "#{s} NULLS LAST"
  else
    s
  end
end
SQL fragment for a literal string with placeholders
# File lib/sequel/dataset/sql.rb, line 359
def placeholder_literal_string_sql(pls)
  args = pls.args
  s = if args.is_a?(Hash)
    re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/
    pls.str.gsub(re){literal(args[$1.to_sym])}
  else
    i = -1
    pls.str.gsub(QUESTION_MARK){literal(args.at(i+=1))}
  end
  s = "(#{s})" if pls.parens
  s
end
SQL fragment for the qualified identifier, specifying a table and a column (or schema and table).
# File lib/sequel/dataset/sql.rb, line 374 374: def qualified_identifier_sql(qcr) 375: [qcr.table, qcr.column].map{|x| [SQL::QualifiedIdentifier, SQL::Identifier, Symbol].any?{|c| x.is_a?(c)} ? literal(x) : quote_identifier(x)}.join('.') 376: end
Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, returns name as a string. If identifiers are being quoted, quotes the name with quoted_identifier.
# File lib/sequel/dataset/sql.rb, line 381
def quote_identifier(name)
  return name if name.is_a?(LiteralString)
  name = name.value if name.is_a?(SQL::Identifier)
  name = input_identifier(name)
  name = quoted_identifier(name) if quote_identifiers?
  name
end
Separates the schema from the table and returns a string with them quoted (if quoting identifiers)
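For example (a sketch; exact output depends on whether identifier quoting is enabled for the dataset):

DB[:items].quote_schema_table(:some_schema__items)
# => "some_schema.items"           (quoting disabled)
# => "\"some_schema\".\"items\""   (quoting enabled, SQL standard quoting)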
# File lib/sequel/dataset/sql.rb, line 391 391: def quote_schema_table(table) 392: schema, table = schema_and_table(table) 393: "#{"#{quote_identifier(schema)}." if schema}#{quote_identifier(table)}" 394: end
This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting not matching the SQL standard, such as backticks (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 399 399: def quoted_identifier(name) 400: "\"#{name.to_s.gsub('"', '""')}\"" 401: end
Split the schema information from the table
# File lib/sequel/dataset/sql.rb, line 404
def schema_and_table(table_name)
  sch = db.default_schema if db
  case table_name
  when Symbol
    s, t, a = split_symbol(table_name)
    [s||sch, t]
  when SQL::QualifiedIdentifier
    [table_name.table, table_name.column]
  when SQL::Identifier
    [sch, table_name.value]
  when String
    [sch, table_name]
  else
    raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String'
  end
end
The SQL fragment for the given window's options.
# File lib/sequel/dataset/sql.rb, line 427
def window_sql(opts)
  raise(Error, 'This dataset does not support window functions') unless supports_window_functions?
  window = literal(opts[:window]) if opts[:window]
  partition = "PARTITION BY #{expression_list(Array(opts[:partition]))}" if opts[:partition]
  order = "ORDER BY #{expression_list(Array(opts[:order]))}" if opts[:order]
  frame = case opts[:frame]
  when nil
    nil
  when :all
    "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING"
  when :rows
    "ROWS UNBOUNDED PRECEDING"
  when String
    opts[:frame]
  else
    raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil"
  end
  "(#{[window, partition, order, frame].compact.join(' ')})"
end
On some adapters, these use native prepared statements and bound variables, on others support is emulated. For details, see the "Prepared Statements/Bound Variables" guide.
PREPARED_ARG_PLACEHOLDER | = | LiteralString.new('?').freeze |
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
DB[:table].filter(:id=>:$id).bind(:id=>1).call(:first)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb, line 198 198: def bind(bind_vars={}) 199: clone(:bind_vars=>@opts[:bind_vars] ? @opts[:bind_vars].merge(bind_vars) : bind_vars) 200: end
For the given type (:select, :insert, :update, or :delete), run the SQL with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), which may contain placeholders.
DB[:table].filter(:id=>:$id).call(:first, :id=>1)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb, line 211 211: def call(type, bind_variables={}, *values, &block) 212: prepare(type, nil, *values).call(bind_variables, &block) 213: end
Prepare an SQL statement for later execution. This returns a clone of the dataset extended with PreparedStatementMethods, on which you can call call with the hash of bind variables to do substitution. The prepared statement is also stored in the associated database. The following usage is identical:
ps = DB[:table].filter(:name=>:$name).prepare(:first, :select_by_name)

ps.call(:name=>'Blah')
# SELECT * FROM table WHERE name = ? -- ('Blah')
# => {:id=>1, :name=>'Blah'}

DB.call(:select_by_name, :name=>'Blah') # Same thing
# File lib/sequel/dataset/prepared_statements.rb, line 228 228: def prepare(type, name=nil, *values) 229: ps = to_prepared_statement(type, values) 230: db.prepared_statements[name] = ps if name 231: ps 232: end
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the prepared statement type and the values used for modification.
# File lib/sequel/dataset/prepared_statements.rb, line 238
def to_prepared_statement(type, values=nil)
  ps = bind
  ps.extend(PreparedStatementMethods)
  ps.prepared_type = type
  ps.prepared_modify_values = values
  ps
end
These methods all execute the dataset's SQL on the database. They don't return modified datasets, so if used in a method chain they should be the last method called.
ACTION_METHODS | = | %w'<< [] []= all avg count columns columns! delete each empty? fetch_rows first get import insert insert_multiple interval last map max min multi_insert range select_hash select_map select_order_map set single_record single_value sum to_csv to_hash truncate update'.map{|x| x.to_sym} | Action methods defined by Sequel that execute code on the database. |
Returns the first record matching the conditions. Examples:
DB[:table][:id=>1] # SELECT * FROM table WHERE (id = 1) LIMIT 1
# => {:id=>1}
# File lib/sequel/dataset/actions.rb, line 26 26: def [](*conditions) 27: raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 28: first(*conditions) 29: end
Update all records matching the conditions with the values specified. Returns the number of rows affected.
DB[:table][:id=>1] = {:id=>2} # UPDATE table SET id = 2 WHERE id = 1
# => 1 # number of rows affected
# File lib/sequel/dataset/actions.rb, line 36 36: def []=(conditions, values) 37: filter(conditions).update(values) 38: end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]

# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb, line 48
def all(&block)
  a = []
  each{|r| a << r}
  post_load(a)
  a.each(&block) if block
  a
end
Returns the average value for the given column.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
# => 3
# File lib/sequel/dataset/actions.rb, line 60 60: def avg(column) 61: aggregate_dataset.get{avg(column)} 62: end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb, line 73
def columns
  return @columns if @columns
  ds = unfiltered.unordered.clone(:distinct => nil, :limit => 1)
  ds.each{break}
  @columns = ds.instance_variable_get(:@columns)
  @columns || []
end
Returns the number of records in the dataset.
DB[:table].count # SELECT COUNT(*) AS count FROM table LIMIT 1
# => 3
# File lib/sequel/dataset/actions.rb, line 95 95: def count 96: aggregate_dataset.get{COUNT(:*){}.as(count)}.to_i 97: end
Deletes the records in the dataset. The returned value should be number of records deleted, but that is adapter dependent.
DB[:table].delete # DELETE FROM table
# => 3
# File lib/sequel/dataset/actions.rb, line 104 104: def delete 105: execute_dui(delete_sql) 106: end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
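A hedged sketch of the safer pattern (table and column names are illustrative):

# all loads every row before the block runs, so queries inside the
# block don't interfere with a result set that is still being iterated
DB[:table].all do |row|
  DB[:other_table].filter(:id=>row[:other_id]).update(:flag=>true)
end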
# File lib/sequel/dataset/actions.rb, line 117
def each(&block)
  if @opts[:graph]
    graph_each(&block)
  elsif row_proc = @row_proc
    fetch_rows(select_sql){|r| yield row_proc.call(r)}
  else
    fetch_rows(select_sql, &block)
  end
  self
end
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 FROM table LIMIT 1
# => false
# File lib/sequel/dataset/actions.rb, line 132 132: def empty? 133: get(1).nil? 134: end
Executes a select query and fetches records, yielding each record to the supplied block. The yielded records should be hashes with symbol keys. This method should generally not be called by user code; use each instead.
# File lib/sequel/dataset/actions.rb, line 140 140: def fetch_rows(sql) 141: raise NotImplemented, NOTIMPL_MSG 142: end
If an integer argument is given, it is interpreted as a limit, and all matching records up to that limit are returned. If no argument is passed, it returns the first matching record. If any other type of argument is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything. Examples:
DB[:table].first # SELECT * FROM table LIMIT 1
# => {:id=>7}

DB[:table].first(2) # SELECT * FROM table LIMIT 2
# => [{:id=>6}, {:id=>4}]

DB[:table].first(:id=>2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
# => {:id=>2}

DB[:table].first("id = 3") # SELECT * FROM table WHERE (id = 3) LIMIT 1
# => {:id=>3}

DB[:table].first("id = ?", 4) # SELECT * FROM table WHERE (id = 4) LIMIT 1
# => {:id=>4}

DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
# => {:id=>5}

DB[:table].first("id > ?", 4){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
# => {:id=>5}

DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
# => [{:id=>1}]
# File lib/sequel/dataset/actions.rb, line 174
def first(*args, &block)
  ds = block ? filter(&block) : self

  if args.empty?
    ds.single_record
  else
    args = (args.size == 1) ? args.first : args
    if Integer === args
      ds.limit(args).all
    else
      ds.filter(args).single_record
    end
  end
end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and block is given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1
# => 3

ds.get{sum(id)} # SELECT sum(id) FROM table LIMIT 1
# => 6
# File lib/sequel/dataset/actions.rb, line 197
def get(column=nil, &block)
  if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    select(column).single_value
  else
    select(&block).single_value
  end
end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]])
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b))
# INSERT INTO table (x, y) SELECT a, b FROM table2
The method also accepts a :slice or :commit_every option that specifies the number of records to insert per transaction. This is useful especially when inserting a large number of records, e.g.:
# this will commit every 50 records
dataset.import([:x, :y], [[1, 2], [3, 4], ...], :slice => 50)
# File lib/sequel/dataset/actions.rb, line 228
def import(columns, values, opts={})
  return @db.transaction{insert(columns, values)} if values.is_a?(Dataset)

  return if values.empty?
  raise(Error, IMPORT_ERROR_MSG) if columns.empty?

  if slice_size = opts[:commit_every] || opts[:slice]
    offset = 0
    loop do
      @db.transaction(opts){multi_insert_sql(columns, values[offset, slice_size]).each{|st| execute_dui(st)}}
      offset += slice_size
      break if offset >= values.length
    end
  else
    statements = multi_insert_sql(columns, values)
    @db.transaction{statements.each{|st| execute_dui(st)}}
  end
end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
insert handles a number of different argument formats:
DB[:items].insert # INSERT INTO items DEFAULT VALUES
DB[:items].insert({}) # INSERT INTO items DEFAULT VALUES
DB[:items].insert(1, 2, 3) # INSERT INTO items VALUES (1, 2, 3)
DB[:items].insert([:a, :b], [1,2]) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(:a => 1, :b => 2) # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(DB[:old_items]) # INSERT INTO items SELECT * FROM old_items
DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb, line 280 280: def insert(*values) 281: execute_insert(insert_sql(*values)) 282: end
Inserts multiple values. If a block is given it is invoked for each item in the given array before inserting it. See multi_insert as a possible faster version that inserts multiple records in one SQL statement.
DB[:table].insert_multiple([{:x=>1}, {:x=>2}])
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)

DB[:table].insert_multiple([{:x=>1}, {:x=>2}]){|row| row[:y] = row[:x] * 2}
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (2, 4)
# File lib/sequel/dataset/actions.rb, line 296
def insert_multiple(array, &block)
  if block
    array.each {|i| insert(block[i])}
  else
    array.each {|i| insert(i)}
  end
end
Reverses the order and then runs first. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
# => {:id=>10}

DB[:table].order(:id.desc).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
# => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb, line 323 323: def last(*args, &block) 324: raise(Error, 'No order specified') unless @opts[:order] 325: reverse.first(*args, &block) 326: end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table
# => [1, 2, 3, ...]

DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table
# => [2, 4, 6, ...]
# File lib/sequel/dataset/actions.rb, line 337
def map(column=nil, &block)
  if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    super(){|r| r[column]}
  else
    super(&block)
  end
end
Returns the maximum value for the given column.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1
# => 10
# File lib/sequel/dataset/actions.rb, line 350 350: def max(column) 351: aggregate_dataset.get{max(column)} 352: end
Returns the minimum value for the given column.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb, line 358 358: def min(column) 359: aggregate_dataset.get{min(column)} 360: end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{:x => 1}, {:x => 2}])
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
You can also use the :slice or :commit_every option that import accepts.
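For example, to commit in batches of 100 rows (a sketch; rows is an assumed array of hashes):

DB[:table].multi_insert(rows, :slice=>100)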
# File lib/sequel/dataset/actions.rb, line 374 374: def multi_insert(hashes, opts={}) 375: return if hashes.empty? 376: columns = hashes.first.keys 377: import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) 378: end
Returns a Range instance made from the minimum and maximum values for the given column.
DB[:table].range(:id) # SELECT min(id) AS v1, max(id) AS v2 FROM table LIMIT 1
# => 1..10
# File lib/sequel/dataset/actions.rb, line 385 385: def range(column) 386: if r = aggregate_dataset.select{[min(column).as(v1), max(column).as(v2)]}.first 387: (r[:v1]..r[:v2]) 388: end 389: end
Returns a hash with key_column values as keys and value_column values as values. Similar to to_hash, but only selects the two columns.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table
# => {1=>'a', 2=>'b', ...}
# File lib/sequel/dataset/actions.rb, line 396 396: def select_hash(key_column, value_column) 397: select(key_column, value_column).to_hash(hash_key_symbol(key_column), hash_key_symbol(value_column)) 398: end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined.
DB[:table].select_map(:id) # SELECT id FROM table
# => [3, 5, 8, 1, ...]

DB[:table].select_map{abs(id)} # SELECT abs(id) FROM table
# => [3, 5, 8, 1, ...]
# File lib/sequel/dataset/actions.rb, line 410
def select_map(column=nil, &block)
  ds = naked.ungraphed
  ds = if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    ds.select(column)
  else
    ds.select(&block)
  end
  ds.map{|r| r.values.first}
end
The same as select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id
# => [1, 2, 3, 4, ...]

DB[:table].select_order_map{abs(id)} # SELECT abs(id) FROM table ORDER BY abs(id)
# => [1, 2, 3, 4, ...]
# File lib/sequel/dataset/actions.rb, line 428
def select_order_map(column=nil, &block)
  ds = naked.ungraphed
  ds = if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    ds.select(column).order(unaliased_identifier(column))
  else
    ds.select(&block).order(&block)
  end
  ds.map{|r| r.values.first}
end
Returns a string in CSV format containing the dataset records. By default the CSV representation includes the column titles in the first line. You can turn that off by passing false as the include_column_titles argument.
This does not use a CSV library or handle quoting of values in any way. If any values in any of the rows could include commas or line endings, you shouldn't use this.
puts DB[:table].to_csv # SELECT * FROM table
# id,name
# 1,Jim
# 2,Bob
# File lib/sequel/dataset/actions.rb, line 483
def to_csv(include_column_titles = true)
  n = naked
  cols = n.columns
  csv = ''
  csv << "#{cols.join(COMMA_SEPARATOR)}\r\n" if include_column_titles
  n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA_SEPARATOR)}\r\n"}
  csv
end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash(:id, :name) # SELECT * FROM table
# {1=>'Jim', 2=>'Bob', ...}

DB[:table].to_hash(:id) # SELECT * FROM table
# {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
# File lib/sequel/dataset/actions.rb, line 502
def to_hash(key_column, value_column = nil)
  inject({}) do |m, r|
    m[r[key_column]] = value_column ? r[value_column] : r
    m
  end
end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table
# => nil
# File lib/sequel/dataset/actions.rb, line 513 513: def truncate 514: execute_ddl(truncate_sql) 515: end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent. values should be a hash where the keys are columns to set and the values are the values to which to set the columns.
DB[:table].update(:x=>nil) # UPDATE table SET x = NULL
# => 10

DB[:table].update(:x=>:x+1, :y=>0) # UPDATE table SET x = (x + 1), y = 0
# => 10
# File lib/sequel/dataset/actions.rb, line 527 527: def update(values={}) 528: execute_dui(update_sql(values)) 529: end
MUTATION_METHODS | = | QUERY_METHODS | All methods that should have a ! method added that modifies the receiver. |
identifier_input_method | [RW] | Set the method to call on identifiers going into the database for this dataset |
identifier_output_method | [RW] | Set the method to call on identifiers coming from the database for this dataset |
quote_identifiers | [W] | Whether to quote identifiers for this dataset |
row_proc | [RW] | The row_proc for this dataset, should be a Proc that takes a single hash argument and returns the object you want each to return. |
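For example, a row_proc can wrap each returned hash in another object (a sketch; Post is a hypothetical class with a hash-based constructor):

ds = DB[:posts]
ds.row_proc = proc{|h| Post.new(h)}
ds.first # => a Post instance instead of a plain hash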
Set up mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
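For example, filter! applies the filter to the receiver itself rather than returning a modified copy (a minimal sketch):

ds = DB[:posts]
ds.filter!(:author => 'david')
ds.sql # => "SELECT * FROM posts WHERE (author = 'david')"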
# File lib/sequel/dataset/mutation.rb, line 14 14: def self.def_mutation_method(*meths) 15: meths.each do |meth| 16: class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) 17: end 18: end
These methods all return booleans, with most describing whether or not the dataset supports a feature.
WITH_SUPPORTED | = | :select_with_sql | Method used to check if WITH is supported |
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb, line 13 13: def quote_identifiers? 14: @quote_identifiers 15: end
Whether the dataset supports common table expressions (the WITH clause).
# File lib/sequel/dataset/features.rb, line 31 31: def supports_cte? 32: select_clause_methods.include?(WITH_SUPPORTED) 33: end
Whether the dataset supports the DISTINCT ON clause, false by default.
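These predicates let code branch on adapter capabilities before building SQL, e.g. (a sketch):

ds = DB[:items].order(:id)
ds = ds.distinct(:id) if ds.supports_distinct_on?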
# File lib/sequel/dataset/features.rb, line 36 36: def supports_distinct_on? 37: false 38: end
Whether the dataset supports the IS TRUE syntax.
# File lib/sequel/dataset/features.rb, line 57 57: def supports_is_true? 58: true 59: end
Whether the dataset supports the JOIN table USING (column1, …) syntax.
# File lib/sequel/dataset/features.rb, line 62 62: def supports_join_using? 63: true 64: end
Whether modifying joined datasets is supported.
# File lib/sequel/dataset/features.rb, line 67 67: def supports_modifying_joins? 68: false 69: end
Dataset graphing changes the dataset to yield hashes where keys are table name symbols and values are hashes representing the columns related to that table. All of these methods return modified copies of the receiver.
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_more when graphing). See set_graph_aliases.
DB[:table].add_graph_aliases(:some_alias=>[:table, :column])
# SELECT ..., table.column AS some_alias
# => {:table=>{:column=>some_alias_value, ...}, ...}
# File lib/sequel/dataset/graph.rb, line 17
def add_graph_aliases(graph_aliases)
  columns, graph_aliases = graph_alias_columns(graph_aliases)
  ds = select_more(*columns)
  ds.opts[:graph_aliases] = (ds.opts[:graph_aliases] || (ds.opts[:graph][:column_aliases] rescue {}) || {}).merge(graph_aliases)
  ds
end
Allows you to join multiple datasets/tables and have the result set split into component tables.
This differs from the usual usage of join, which returns the result set as a single hash. For example:
# CREATE TABLE artists (id INTEGER, name TEXT);
# CREATE TABLE albums (id INTEGER, name TEXT, artist_id INTEGER);

DB[:artists].left_outer_join(:albums, :artist_id=>:id).first
# => {:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}

DB[:artists].graph(:albums, :artist_id=>:id).first
# => {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>{:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}}
With a regular join such as left_outer_join, column names shared between the tables collide in the single returned hash. You can get around that by using select with correct aliases for all of the columns, but it is simpler to use graph and have the result set split for you. In addition, graph respects any row_proc of the current dataset and of the datasets you use with graph.
If you are graphing a table and all columns for that table are nil, this indicates that no matching rows existed in the table, so graph will return nil instead of a hash with all nil values:
# If the artist doesn't have any albums
DB[:artists].graph(:albums, :artist_id=>:id).first
# => {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>nil}
Arguments:
dataset : | Can be a symbol (specifying a table), another dataset, or an object that responds to dataset and returns a symbol or a dataset |
join_conditions : | Any condition(s) allowed by join_table. |
block : | A block that is passed to join_table. |
Options:
:from_self_alias : | The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a from_self before graphing, and this option determines the alias to use. |
:implicit_qualifier : | The qualifier of implicit conditions, see join_table. |
:join_type : | The type of join to use (passed to join_table). Defaults to :left_outer. |
:select : | An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use graph instead of join_table. |
:table_alias : | The alias to use for the table. If not specified, doesn't alias the table. You will get an error if the alias (or table) name is used more than once. |
# File lib/sequel/dataset/graph.rb, line 74
def graph(dataset, join_conditions = nil, options = {}, &block)
  # Allow the use of a model, dataset, or symbol as the first argument
  # Find the table name/dataset based on the argument
  dataset = dataset.dataset if dataset.respond_to?(:dataset)
  table_alias = options[:table_alias]
  case dataset
  when Symbol
    table = dataset
    dataset = @db[dataset]
    table_alias ||= table
  when ::Sequel::Dataset
    if dataset.simple_select_all?
      table = dataset.opts[:from].first
      table_alias ||= table
    else
      table = dataset
      table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1)
    end
  else
    raise Error, "The dataset argument should be a symbol, dataset, or model"
  end

  # Raise Sequel::Error with explanation that the table alias has been used
  raise_alias_error = lambda do
    raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been used, please specify " \
      "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}")
  end

  # Only allow table aliases that haven't been used
  raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias)

  # Use a from_self if this is already a joined table
  ds = (!@opts[:graph] && (@opts[:from].length > 1 || @opts[:join])) ? from_self(:alias=>options[:from_self_alias] || first_source) : self

  # Join the table early in order to avoid cloning the dataset twice
  ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], &block)
  opts = ds.opts

  # Whether to include the table in the result set
  add_table = options[:select] == false ? false : true
  # Whether to add the columns to the list of column aliases
  add_columns = !ds.opts.include?(:graph_aliases)

  # Setup the initial graph data structure if it doesn't exist
  unless graph = opts[:graph]
    master = alias_symbol(ds.first_source_alias)
    raise_alias_error.call if master == table_alias
    # Master hash storing all .graph related information
    graph = opts[:graph] = {}
    # Associates column aliases back to tables and columns
    column_aliases = graph[:column_aliases] = {}
    # Associates table alias (the master is never aliased)
    table_aliases = graph[:table_aliases] = {master=>self}
    # Keep track of the alias numbers used
    ca_num = graph[:column_alias_num] = Hash.new(0)
    # All columns in the master table are never
    # aliased, but are not included if set_graph_aliases
    # has been used.
    if add_columns
      select = opts[:select] = []
      columns.each do |column|
        column_aliases[column] = [master, column]
        select.push(SQL::QualifiedIdentifier.new(master, column))
      end
    end
  end

  # Add the table alias to the list of aliases.
  # Even if it isn't used in the result set,
  # we add a key for it with a nil value so we can check if it
  # is used more than once
  table_aliases = graph[:table_aliases]
  table_aliases[table_alias] = add_table ? dataset : nil

  # Add the columns to the selection unless we are ignoring them
  if add_table && add_columns
    select = opts[:select]
    column_aliases = graph[:column_aliases]
    ca_num = graph[:column_alias_num]
    # Which columns to add to the result set
    cols = options[:select] || dataset.columns
    # If the column hasn't been used yet, don't alias it.
    # If it has been used, try table_column.
    # If that has been used, try table_column_N
    # using the next value of N that we know hasn't been
    # used
    cols.each do |column|
      col_alias, identifier = if column_aliases[column]
        column_alias = :"#{table_alias}_#{column}"
        if column_aliases[column_alias]
          column_alias_num = ca_num[column_alias]
          column_alias = :"#{column_alias}_#{column_alias_num}"
          ca_num[column_alias] += 1
        end
        [column_alias, SQL::QualifiedIdentifier.new(table_alias, column).as(column_alias)]
      else
        [column, SQL::QualifiedIdentifier.new(table_alias, column)]
      end
      column_aliases[col_alias] = [table_alias, column]
      select.push(identifier)
    end
  end
  ds
end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select for a graphed dataset, and must be used instead of select whenever graphing is used.
graph_aliases : | Should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name. |
DB[:artists].graph(:albums, :artist_id=>:id).
  set_graph_aliases(:name=>:artists, :album_name=>[:albums, :name], :forty_two=>[:albums, :fourtwo, 42]).first
# SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...
# => {:artists=>{:name=>artists.name}, :albums=>{:name=>albums.name, :fourtwo=>42}}
# File lib/sequel/dataset/graph.rb, line 203
def set_graph_aliases(graph_aliases)
  columns, graph_aliases = graph_alias_columns(graph_aliases)
  ds = select(*columns)
  ds.opts[:graph_aliases] = graph_aliases
  ds
end
These methods don't fit cleanly into another section.
NOTIMPL_MSG | = | "This method must be overridden in Sequel adapters".freeze |
ARRAY_ACCESS_ERROR_MSG | = | 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze |
ARG_BLOCK_ERROR_MSG | = | 'Must use either an argument or a block, not both'.freeze |
IMPORT_ERROR_MSG | = | 'Using Sequel::Dataset#import an empty column array is not allowed'.freeze |
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Database#dataset method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb, line 28
def initialize(db, opts = nil)
  @db = db
  @quote_identifiers = db.quote_identifiers? if db.respond_to?(:quote_identifiers?)
  @identifier_input_method = db.identifier_input_method if db.respond_to?(:identifier_input_method)
  @identifier_output_method = db.identifier_output_method if db.respond_to?(:identifier_output_method)
  @opts = opts || {}
  @row_proc = nil
end
Return the dataset as an aliased expression with the given alias. You can use this as a FROM or JOIN dataset, or as a column if this dataset returns a single row and column.
DB.from(DB[:table].as(:b)) # SELECT * FROM (SELECT * FROM table) AS b
# File lib/sequel/dataset/misc.rb, line 53 53: def as(aliaz) 54: ::Sequel::SQL::AliasedExpression.new(self, aliaz) 55: end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')}
# File lib/sequel/dataset/misc.rb, line 62 62: def each_server 63: db.servers.each{|s| yield server(s)} 64: end
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb, line 67 67: def first_source 68: first_source_alias 69: end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table
DB[:table___t].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb, line 79
def first_source_alias
  source = @opts[:from]
  if source.nil? || source.empty?
    raise Error, 'No source specified for query'
  end
  case s = source.first
  when SQL::AliasedExpression
    s.aliaz
  when Symbol
    sch, table, aliaz = split_symbol(s)
    aliaz ? aliaz.to_sym : s
  else
    s
  end
end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the original table, not the alias.
DB[:table].first_source_table # => :table
DB[:table___t].first_source_table # => :table
# File lib/sequel/dataset/misc.rb, line 104
def first_source_table
  source = @opts[:from]
  if source.nil? || source.empty?
    raise Error, 'No source specified for query'
  end
  case s = source.first
  when SQL::AliasedExpression
    s.expression
  when Symbol
    sch, table, aliaz = split_symbol(s)
    aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s
  else
    s
  end
end
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
# File lib/sequel/dataset/misc.rb, line 135
def split_alias(c)
  case c
  when Symbol
    c_table, column, aliaz = split_symbol(c)
    [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz]
  when SQL::AliasedExpression
    [c.expression, c.aliaz]
  else
    [c, nil]
  end
end
Creates a unique table alias that hasn't already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with "_N" if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
DB[:table].unused_table_alias(:t) # => :t
DB[:table].unused_table_alias(:table) # => :table_0
DB[:table, :table_0].unused_table_alias(:table) # => :table_1
# File lib/sequel/dataset/misc.rb, line 162
def unused_table_alias(table_alias)
  table_alias = alias_symbol(table_alias)
  used_aliases = []
  used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from]
  used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join]
  if used_aliases.include?(table_alias)
    i = 0
    loop do
      ta = :"#{table_alias}_#{i}"
      return ta unless used_aliases.include?(ta)
      i += 1
    end
  else
    table_alias
  end
end
These methods all return modified copies of the receiver.
COLUMN_CHANGE_OPTS | = | [:select, :sql, :from, :join].freeze | The dataset options that require the removal of cached columns if changed. | |
NON_SQL_OPTIONS | = | [:server, :defaults, :overrides, :graph, :eager_graph, :graph_aliases] | Which options don‘t affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table. | |
CONDITIONED_JOIN_TYPES | = | [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left] | These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call. | |
UNCONDITIONED_JOIN_TYPES | = | [:natural, :natural_left, :natural_right, :natural_full, :cross] | These symbols have _join methods created (e.g. natural_join) that call join_table with the symbol. They only accept a single table argument which is passed to join_table, and they raise an error if called with a block. | |
JOIN_METHODS | = | (CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table] | All methods that return modified datasets with a joined table added. | |
QUERY_METHODS | = | %w'add_graph_aliases and distinct except exclude filter for_update from from_self graph grep group group_and_count group_by having intersect invert limit lock_style naked or order order_append order_by order_more order_prepend paginate qualify query reverse reverse_order select select_all select_append select_more server set_defaults set_graph_aliases set_overrides unfiltered ungraphed ungrouped union unlimited unordered where with with_recursive with_sql'.collect{|x| x.to_sym} + JOIN_METHODS | Methods that return modified datasets |
Adds a further filter to an existing filter using AND. If no filter exists, an error is raised. This method is identical to filter except that it expects an existing filter.
DB[:table].filter(:a).and(:b) # SELECT * FROM table WHERE a AND b
# File lib/sequel/dataset/query.rb, line 43 43: def and(*cond, &block) 44: raise(InvalidOperation, "No existing filter found.") unless @opts[:having] || @opts[:where] 45: filter(*cond, &block) 46: end
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.
# File lib/sequel/dataset/query.rb, line 52 52: def clone(opts = {}) 53: c = super() 54: c.opts = @opts.merge(opts) 55: c.instance_variable_set(:@columns, nil) if opts.keys.any?{|o| COLUMN_CHANGE_OPTS.include?(o)} 56: c 57: end
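A rough sketch of what a call looks like (mainly of interest to extension authors; the receiver itself is never mutated):

ds = DB[:items]
limited = ds.clone(:limit => 10) # new dataset with :limit merged into its options
ds.opts[:limit] # => nil
limited.opts[:limit] # => 10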
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
# File lib/sequel/dataset/query.rb, line 68 68: def distinct(*args) 69: raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on? 70: clone(:distinct => args) 71: end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
DB[:items].except(DB[:other_items]) # SELECT * FROM items EXCEPT SELECT * FROM other_items
DB[:items].except(DB[:other_items], :all=>true, :from_self=>false) # SELECT * FROM items EXCEPT ALL SELECT * FROM other_items
DB[:items].except(DB[:other_items], :alias=>:i) # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 90 90: def except(dataset, opts={}) 91: opts = {:all=>opts} unless opts.is_a?(Hash) 92: raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? 93: raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? 94: compound_clone(:except, dataset, opts) 95: end
Performs the inverse of Dataset#filter. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(:category => 'software') # SELECT * FROM items WHERE (category != 'software')
DB[:items].exclude(:category => 'software', :id=>3) # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
# File lib/sequel/dataset/query.rb, line 105 105: def exclude(*cond, &block) 106: clause = (@opts[:having] ? :having : :where) 107: cond = cond.first if cond.size == 1 108: cond = filter_expr(cond, &block) 109: cond = SQL::BooleanExpression.invert(cond) 110: cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause] 111: clone(clause => cond) 112: end
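Because exclude shares filter's argument handling, a virtual row block also works, for example (a sketch):

DB[:items].exclude{price > 100} # SELECT * FROM items WHERE (price <= 100)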
Returns a copy of the dataset with the given conditions imposed upon it. If the query already has a HAVING clause, then the conditions are imposed in the HAVING clause. If not, then they are imposed in the WHERE clause.
filter accepts a wide variety of argument types, as shown in the examples below: hashes of conditions, strings with ? placeholders followed by their arguments, arrays of condition pairs, literal SQL strings, symbols (treated as boolean columns), and SQL expression objects.
filter also takes a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the "Virtual Rows" guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].filter(:id => 3) # SELECT * FROM items WHERE (id = 3)
DB[:items].filter('price < ?', 100) # SELECT * FROM items WHERE price < 100
DB[:items].filter([[:id, [1, 2, 3]], [:id, 0..10]]) # SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))
DB[:items].filter('price < 100') # SELECT * FROM items WHERE price < 100
DB[:items].filter(:active) # SELECT * FROM items WHERE active
DB[:items].filter{price < 100} # SELECT * FROM items WHERE (price < 100)
Multiple filter calls can be chained for scoping:
software = dataset.filter(:category => 'software').filter{price < 100} # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the "Dataset Filtering" guide for more examples and details.
# File lib/sequel/dataset/query.rb, line 166 166: def filter(*cond, &block) 167: _filter(@opts[:having] ? :having : :where, *cond, &block) 168: end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
# File lib/sequel/dataset/query.rb, line 184 184: def from(*source) 185: table_alias_num = 0 186: sources = [] 187: source.each do |s| 188: case s 189: when Hash 190: s.each{|k,v| sources << SQL::AliasedExpression.new(k,v)} 191: when Dataset 192: sources << SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) 193: when Symbol 194: sch, table, aliaz = split_symbol(s) 195: if aliaz 196: s = sch ? SQL::QualifiedIdentifier.new(sch.to_sym, table.to_sym) : SQL::Identifier.new(table.to_sym) 197: sources << SQL::AliasedExpression.new(s, aliaz.to_sym) 198: else 199: sources << s 200: end 201: else 202: sources << s 203: end 204: end 205: o = {:from=>sources.empty? ? nil : sources} 206: o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 207: clone(o) 208: end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the alias of the result.
ds = DB[:items].order(:name).select(:id, :name) # SELECT id, name FROM items ORDER BY name
ds.from_self # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1
ds.from_self(:alias=>:foo) # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo
# File lib/sequel/dataset/query.rb, line 221 221: def from_self(opts={}) 222: fs = {} 223: @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)} 224: clone(fs).from(opts[:alias] ? as(opts[:alias]) : self) 225: end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions (which are only supported on MySQL and PostgreSQL). Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns : | All columns must be matched to any of the given patterns. |
:all_patterns : | All patterns must match at least one of the columns. |
:case_insensitive : | Use a case insensitive pattern match (the default is case sensitive if the database supports it). |
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%')

dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%') OR (a LIKE 'foo') OR (b LIKE '%test%') OR (b LIKE 'foo'))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
# SELECT * FROM items WHERE (((a LIKE '%foo%') OR (b LIKE '%foo%')) AND ((a LIKE '%bar%') OR (b LIKE '%bar%')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
# SELECT * FROM items WHERE (((a LIKE '%foo%') OR (a LIKE '%bar%')) AND ((b LIKE '%foo%') OR (b LIKE '%bar%')))

dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true)
# SELECT * FROM items WHERE ((a LIKE '%foo%') AND (b LIKE '%foo%') AND (a LIKE '%bar%') AND (b LIKE '%bar%'))
# File lib/sequel/dataset/query.rb, line 258 258: def grep(columns, patterns, opts={}) 259: if opts[:all_patterns] 260: conds = Array(patterns).map do |pat| 261: SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) 262: end 263: filter(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds)) 264: else 265: conds = Array(columns).map do |c| 266: SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) 267: end 268: filter(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds)) 269: end 270: end
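The :case_insensitive option is not shown above; a sketch of its use (the exact SQL generated depends on the database, e.g. PostgreSQL can use ILIKE):

dataset.grep(:a, '%test%', :case_insensitive=>true)
# e.g. SELECT * FROM items WHERE (a ILIKE '%test%') on PostgreSQL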
Returns a copy of the dataset with the results grouped by the value of the given columns.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
# File lib/sequel/dataset/query.rb, line 277 277: def group(*columns) 278: clone(:group => (columns.compact.empty? ? nil : columns)) 279: end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

DB[:items].group_and_count(:first_name___name).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb, line 302 302: def group_and_count(*columns) 303: group(*columns.map{|c| unaliased_identifier(c)}).select(*(columns + [COUNT_OF_ALL_AS_COUNT])) 304: end
Returns a copy of the dataset with the HAVING conditions changed. See filter for argument types.
DB[:items].group(:sum).having(:sum=>10) # SELECT * FROM items GROUP BY sum HAVING (sum = 10)
# File lib/sequel/dataset/query.rb, line 310 310: def having(*cond, &block) 311: _filter(:having, *cond, &block) 312: end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
DB[:items].intersect(DB[:other_items]) # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1
DB[:items].intersect(DB[:other_items], :all=>true, :from_self=>false) # SELECT * FROM items INTERSECT ALL SELECT * FROM other_items
DB[:items].intersect(DB[:other_items], :alias=>:i) # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 331 331: def intersect(dataset, opts={}) 332: opts = {:all=>opts} unless opts.is_a?(Hash) 333: raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? 334: raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? 335: compound_clone(:intersect, dataset, opts) 336: end
Inverts the current filter.
DB[:items].filter(:category => 'software').invert # SELECT * FROM items WHERE (category != 'software')
DB[:items].filter(:category => 'software', :id=>3).invert # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
# File lib/sequel/dataset/query.rb, line 345 345: def invert 346: having, where = @opts[:having], @opts[:where] 347: raise(Error, "No current filter") unless having || where 348: o = {} 349: o[:having] = SQL::BooleanExpression.invert(having) if having 350: o[:where] = SQL::BooleanExpression.invert(where) if where 351: clone(o) 352: end
Alias of inner_join
# File lib/sequel/dataset/query.rb, line 355 355: def join(*args, &block) 356: inner_join(*args, &block) 357: end
Returns a joined dataset. The arguments are used as follows: type is a symbol for the join type (e.g. :inner, :left_outer, :cross). table is the table, dataset, or object responding to table_name to join to. expr gives the join conditions: a conditions hash or array of pairs (keys are qualified to the joined table, values to the last joined or primary table), an array of symbols for a JOIN USING clause, or nil for no conditions. options is either a hash that can contain :table_alias (the alias to use for the joined table) and :implicit_qualifier (the table used to qualify the value side of the conditions), or a symbol, string, or SQL::Identifier used directly as the table alias. If a block is given, it is called with the joined table's alias, the alias of the last joined or primary table, and the array of previous join clauses, and should return an expression that is ANDed into the join conditions.
# File lib/sequel/dataset/query.rb, line 389 389: def join_table(type, table, expr=nil, options={}, &block) 390: using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)} 391: if using_join && !supports_join_using? 392: h = {} 393: expr.each{|s| h[s] = s} 394: return join_table(type, table, h, options) 395: end 396: 397: case options 398: when Hash 399: table_alias = options[:table_alias] 400: last_alias = options[:implicit_qualifier] 401: when Symbol, String, SQL::Identifier 402: table_alias = options 403: last_alias = nil 404: else 405: raise Error, "invalid options format for join_table: #{options.inspect}" 406: end 407: 408: if Dataset === table 409: if table_alias.nil? 410: table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 411: table_alias = dataset_alias(table_alias_num) 412: end 413: table_name = table_alias 414: else 415: table = table.table_name if table.respond_to?(:table_name) 416: table, implicit_table_alias = split_alias(table) 417: table_alias ||= implicit_table_alias 418: table_name = table_alias || table 419: end 420: 421: join = if expr.nil? and !block 422: SQL::JoinClause.new(type, table, table_alias) 423: elsif using_join 424: raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block 425: SQL::JoinUsingClause.new(expr, type, table, table_alias) 426: else 427: last_alias ||= @opts[:last_joined_table] || first_source_alias 428: if Sequel.condition_specifier?(expr) 429: expr = expr.collect do |k, v| 430: k = qualified_column_name(k, table_name) if k.is_a?(Symbol) 431: v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) 432: [k,v] 433: end 434: expr = SQL::BooleanExpression.from_value_pairs(expr) 435: end 436: if block 437: expr2 = yield(table_name, last_alias, @opts[:join] || []) 438: expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 439: end 440: SQL::JoinOnClause.new(expr, type, table, table_alias) 441: end 442: 443: opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name} 444: opts[:num_dataset_sources] = table_alias_num if table_alias_num 445: clone(opts) 446: end
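A few illustrative calls (a sketch; the *_join convenience methods listed above are the more common spelling):

DB[:a].join_table(:cross, :b) # SELECT * FROM a CROSS JOIN b
DB[:a].join_table(:inner, :b, :c=>:d) # SELECT * FROM a INNER JOIN b ON (b.c = a.d)
DB[:a].join_table(:left_outer, :b, [:c]) # SELECT * FROM a LEFT OUTER JOIN b USING (c)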
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb, line 465 465: def limit(l, o = nil) 466: return from_self.limit(l, o) if @opts[:sql] 467: 468: if Range === l 469: o = l.first 470: l = l.last - l.first + (l.exclude_end? ? 0 : 1) 471: end 472: l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) 473: if l.is_a?(Integer) 474: raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 475: end 476: opts = {:limit => l} 477: if o 478: o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) 479: if o.is_a?(Integer) 480: raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 481: end 482: opts[:offset] = o 483: end 484: clone(opts) 485: end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. Otherwise, a symbol may be used for database independent locking. Currently :update is respected by most databases, and :share is supported by some.
DB[:items].lock_style('FOR SHARE') # SELECT * FROM items FOR SHARE
# File lib/sequel/dataset/query.rb, line 493 493: def lock_style(style) 494: clone(:lock => style) 495: end
Returns a cloned dataset without a row_proc.
ds = DB[:items]
ds.row_proc = proc{|r| r.invert}
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File lib/sequel/dataset/query.rb, line 503 503: def naked 504: ds = clone 505: ds.row_proc = nil 506: ds 507: end
Adds an alternate filter to an existing filter using OR. If no filter exists an Error is raised.
DB[:items].filter(:a).or(:b) # SELECT * FROM items WHERE a OR b
# File lib/sequel/dataset/query.rb, line 513 513: def or(*cond, &block) 514: clause = (@opts[:having] ? :having : :where) 515: raise(InvalidOperation, "No existing filter found.") unless @opts[clause] 516: cond = cond.first if cond.size == 1 517: clone(clause => SQL::BooleanExpression.new(:OR, @opts[clause], filter_expr(cond, &block))) 518: end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to filter.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order('a + b'.lit) # SELECT * FROM items ORDER BY a + b
DB[:items].order(:a + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(:name.desc) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(:name.asc(:nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 534 534: def order(*columns, &block) 535: columns += Array(Sequel.virtual_row(&block)) if block 536: clone(:order => (columns.compact.empty?) ? nil : columns) 537: end
Alias of order_more, for naming consistency with order_prepend.
# File lib/sequel/dataset/query.rb, line 540 540: def order_append(*columns, &block) 541: order_more(*columns, &block) 542: end
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_more(:b) # SELECT * FROM items ORDER BY a, b
# File lib/sequel/dataset/query.rb, line 554 554: def order_more(*columns, &block) 555: columns = @opts[:order] + columns if @opts[:order] 556: order(*columns, &block) 557: end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
# File lib/sequel/dataset/query.rb, line 564 564: def order_prepend(*columns, &block) 565: ds = order(*columns, &block) 566: @opts[:order] ? ds.order_more(*@opts[:order]) : ds 567: end
Qualify to the given table, or to the first source if no table is given.
DB[:items].filter(:id=>1).qualify # SELECT items.* FROM items WHERE (items.id = 1)
DB[:items].filter(:id=>1).qualify(:i) # SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb, line 576 576: def qualify(table=first_source) 577: qualify_to(table) 578: end
Return a copy of the dataset with unqualified identifiers in the SELECT, WHERE, GROUP, HAVING, and ORDER clauses qualified by the given table. If no columns are currently selected, select all columns of the given table.
DB[:items].filter(:id=>1).qualify_to(:i) # SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb, line 587 587: def qualify_to(table) 588: o = @opts 589: return clone if o[:sql] 590: h = {} 591: (o.keys & QUALIFY_KEYS).each do |k| 592: h[k] = qualified_expression(o[k], table) 593: end 594: h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty? 595: clone(h) 596: end
Qualify the dataset to its current first source. This is useful if you have unqualified identifiers in the query that all refer to the first source, and you want to join to another table which has columns with the same name as columns in the current dataset. See qualify_to.
DB[:items].filter(:id=>1).qualify_to_first_source # SELECT items.* FROM items WHERE (items.id = 1)
# File lib/sequel/dataset/query.rb, line 606 606: def qualify_to_first_source 607: qualify_to(first_source) 608: end
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(:name.asc) # SELECT * FROM items ORDER BY name DESC
# File lib/sequel/dataset/query.rb, line 616 616: def reverse(*order) 617: order(*invert_order(order.empty? ? @opts[:order] : order)) 618: end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to filter.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb, line 632 632: def select(*columns, &block) 633: columns += Array(Sequel.virtual_row(&block)) if block 634: m = [] 635: columns.each do |i| 636: i.is_a?(Hash) ? m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) : m << i 637: end 638: clone(:select => m) 639: end
Returns a copy of the dataset selecting the wildcard.
DB[:items].select(:a).select_all # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 644 644: def select_all 645: clone(:select => nil) 646: end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb, line 655 655: def select_append(*columns, &block) 656: cur_sel = @opts[:select] 657: cur_sel = [WILDCARD] if !cur_sel || cur_sel.empty? 658: select(*(cur_sel + columns), &block) 659: end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected it will just select the columns given.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_more(:b) # SELECT a, b FROM items
DB[:items].select_more(:b) # SELECT b FROM items
# File lib/sequel/dataset/query.rb, line 668 668: def select_more(*columns, &block) 669: columns = @opts[:select] + columns if @opts[:select] 670: select(*columns, &block) 671: end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default behavior (SELECT queries use the :read_only server, and all other queries use the :default server). This method is always available, but it is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
# File lib/sequel/dataset/query.rb, line 682 682: def server(servr) 683: clone(:server=>servr) 684: end
Set the default values for insert and update statements. The values hash passed to insert or update are merged into this hash, so any values in the hash passed to insert or update will override values passed to this method.
DB[:items].set_defaults(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b') # INSERT INTO items (a, c, b) VALUES ('d', 'c', 'b')
# File lib/sequel/dataset/query.rb, line 692 692: def set_defaults(hash) 693: clone(:defaults=>(@opts[:defaults]||{}).merge(hash)) 694: end
Set values that override hash arguments given to insert and update statements. This hash is merged into the hash provided to insert or update, so values will override any values given in the insert/update hashes.
DB[:items].set_overrides(:a=>'a', :c=>'c').insert(:a=>'d', :b=>'b') # INSERT INTO items (a, c, b) VALUES ('a', 'c', 'b')
# File lib/sequel/dataset/query.rb, line 702 702: def set_overrides(hash) 703: clone(:overrides=>hash.merge(@opts[:overrides]||{})) 704: end
Unbind bound variables from this dataset‘s filter and return an array of two objects. The first object is a modified dataset where the filter has been replaced with one that uses bound variable placeholders. The second object is the hash of unbound variables. You can then prepare and execute (or just call) the dataset with the bound variables to get results.
ds, bv = DB[:items].filter(:a=>1).unbind
ds # SELECT * FROM items WHERE (a = $a)
bv # {:a => 1}
ds.call(:select, bv)
# File lib/sequel/dataset/query.rb, line 716 716: def unbind 717: u = Unbinder.new 718: ds = clone(:where=>u.transform(opts[:where]), :join=>u.transform(opts[:join])) 719: [ds, u.binds] 720: end
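To follow the prepare route mentioned above (a sketch; the statement name :select_by_a is made up for this example):

ds, bv = DB[:items].filter(:a=>1).unbind
ps = ds.prepare(:select, :select_by_a) # prepare a named SELECT statement
ps.call(bv) # execute it with the unbound variables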
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias : | Use the given value as the from_self alias |
:all : | Set to true to use UNION ALL instead of UNION, so duplicate rows can occur |
:from_self : | Set to false to not wrap the returned dataset in a from_self, use with care. |
DB[:items].union(DB[:other_items]).sql #=> "SELECT * FROM items UNION SELECT * FROM other_items"
DB[:items].union(DB[:other_items], :all=>true, :from_self=>false) # SELECT * FROM items UNION ALL SELECT * FROM other_items
DB[:items].union(DB[:other_items], :alias=>:i) # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb, line 754 754: def union(dataset, opts={}) 755: opts = {:all=>opts} unless opts.is_a?(Hash) 756: compound_clone(:union, dataset, opts) 757: end
Add a condition to the WHERE clause. See filter for argument types.
DB[:items].group(:a).having(:a).filter(:b) # SELECT * FROM items GROUP BY a HAVING a AND b
DB[:items].group(:a).having(:a).where(:b) # SELECT * FROM items WHERE b GROUP BY a HAVING a
# File lib/sequel/dataset/query.rb, line 780 780: def where(*cond, &block) 781: _filter(:where, *cond, &block) 782: end
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
:args : | Specify the arguments/columns for the CTE, should be an array of symbols. |
:recursive : | Specify that this is a recursive CTE |
DB[:items].with(:items, DB[:syx].filter(:name.like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%')) SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 792 792: def with(name, dataset, opts={}) 793: raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 794: clone(:with=>(@opts[:with]||[]) + [opts.merge(:name=>name, :dataset=>dataset)]) 795: end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
:args : | Specify the arguments/columns for the CTE, should be an array of symbols. |
:union_all : | Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts. |
DB[:t].select(:i___id, :pi___parent_id).
  with_recursive(:t,
    DB[:i1].filter(:parent_id=>nil),
    DB[:t].join(:t, :i=>:parent_id).select(:i1__id, :i1__parent_id),
    :args=>[:i, :pi])
# WITH RECURSIVE t(i, pi) AS (
#   SELECT * FROM i1 WHERE (parent_id IS NULL)
#   UNION ALL
#   SELECT i1.id, i1.parent_id FROM t INNER JOIN t ON (t.i = t.parent_id)
# )
# SELECT i AS id, pi AS parent_id FROM t
# File lib/sequel/dataset/query.rb, line 814 814: def with_recursive(name, nonrecursive, recursive, opts={}) 815: raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 816: clone(:with=>(@opts[:with]||[]) + [opts.merge(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]) 817: end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
# File lib/sequel/dataset/query.rb, line 823 823: def with_sql(sql, *args) 824: sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? 825: clone(:sql=>sql) 826: end
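The implementation above also accepts placeholder arguments, which the example does not show; a sketch:

DB[:items].with_sql('SELECT * FROM foo WHERE id = ?', 5) # SELECT * FROM foo WHERE id = 5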
Return true if the dataset's options include a non-nil value for any of the given option keys.
# File lib/sequel/dataset/query.rb, line 831 831: def options_overlap(opts) 832: !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty? 833: end
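For illustration (a sketch based on the implementation above):

DB[:items].options_overlap([:limit]) # => false
DB[:items].limit(10).options_overlap([:limit]) # => true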
Whether this dataset is a simple SELECT * FROM table.
# File lib/sequel/dataset/query.rb, line 836 836: def simple_select_all? 837: o = @opts.reject{|k,v| v.nil? || NON_SQL_OPTIONS.include?(k)} 838: o.length == 1 && (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression)) 839: end
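For illustration (a sketch):

DB[:items].simple_select_all? # => true
DB[:items].select(:a).simple_select_all? # => false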
These are methods you can call to see what SQL will be generated by the dataset.