Search Results

Search found 8340 results on 334 pages for 'merge join'.

Page 294/334 | < Previous Page | 290 291 292 293 294 295 296 297 298 299 300 301  | Next Page >

  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into this error while trying to modify the Pinax database model (I am using Eclipse PyDev):

        Exception Type: TemplateSyntaxError at /
        Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")

    Please, how do I correct this?

    UPDATE: I asked this question months back and left it unresolved. I ran into the bug again this week, typed the error message into Google, hit this page, and found the question still unanswered, so I think I have to answer it myself and hope it helps someone with the same problem in the future. The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find the database. To resolve that, change to an absolute path for the SQLite file, like this:

        DATABASE_ENGINE = 'sqlite3'  # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # Or path to database file if using sqlite3.
        DATABASE_USER = ''      # Not used with sqlite3.
        DATABASE_PASSWORD = ''  # Not used with sqlite3.
        DATABASE_HOST = ''      # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT = ''      # Set to empty string for default. Not used with sqlite3.

    I hope that helps.
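
    For that settings change to work, PROJECT_ROOT has to be defined near the top of settings.py. A minimal sketch (the variable name is illustrative; use whatever your settings file already defines):

        # settings.py -- compute an absolute project root so the SQLite path
        # does not depend on the current working directory
        import os

        PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))

        DATABASE_ENGINE = 'sqlite3'
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')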

    Read the article

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path after writable= in gitosis.conf I managed to insert a few line breaks. I saved, committed and pushed to the server, at which point I received the following parsing error:

        Traceback (most recent call last):
          File "/usr/bin/gitosis-run-hook", line 8, in <module>
            load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run
            return app.main()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main
            self.handle_args(parser, cfg, options, args)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args
            post_update(cfg, git_dir)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update
            cfg.read(os.path.join(export, '..', 'gitosis.conf'))
          File "/usr/lib/python2.5/ConfigParser.py", line 267, in read
            self._read(fp, filename)
          File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read
            raise e
        ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf
        (...)

    I have removed the line break and amended the commit with:

        git commit -m "fix linebreak" --amend

    However, git push still yields the exact same error. This leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?
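
    One thing worth checking (a guess, since the push output isn't shown): --amend rewrites the last commit, so a plain push of the amended commit is a non-fast-forward and may be rejected before the server-side hook ever sees the fixed file. A possible recovery path:

        # force-push the amended commit so the hook re-reads a fixed gitosis.conf
        # (assumes you are allowed to force-push the admin repo's master)
        git push -f origin master

        # if pushes are blocked entirely, edit the conf directly on the server;
        # paths vary by install, these are illustrative
        ssh git@server
        $EDITOR ~git/.gitosis.conf

    Treat the server-side path as an assumption; the key point is that the hook parses the gitosis.conf belonging to the commit being pushed, so the fixed file has to actually reach the server.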

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes:

      - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
      - depends: an associative array of tests/values that this test depends on;
      - join: a list of DB joins to be added when selecting items to process in this test;
      - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

      - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
      - the list of items is sent to the URI (using POST or stdin);
      - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
      - the results are stored in the DB;
      - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

      - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backups went well;
      - DNS: hosted on the local machine, called via stdin, checks if the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)? A scheduling sketch follows below.
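
    Whatever framework ends up underneath, the scheduling core is small. A minimal Python sketch, assuming each test is reduced to a set of dependency names plus a callable (the DB selection and URI plumbing would live inside the callable):

        # run tests in dependency order, parallelizing whatever is ready
        from concurrent.futures import ThreadPoolExecutor

        def run_tests(tests, workers=4):
            """tests: dict mapping name -> (set_of_dependency_names, callable)."""
            done = set()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while len(done) < len(tests):
                    ready = [n for n, (deps, _) in tests.items()
                             if n not in done and deps <= done]
                    if not ready:
                        raise RuntimeError("circular or unsatisfiable dependency")
                    futures = {pool.submit(tests[n][1]): n for n in ready}
                    for f in futures:
                        f.result()          # re-raise failures
                        done.add(futures[f])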

    Read the article

  • Exclude rows with Doctrine ORM DQL (NOT IN)

    - by Sheriffen
    I'm building a chat application with CodeIgniter and Doctrine. Tables: user, user_roles, user_available. Relations: ONE user has MANY roles; ONE user_available has ONE user. Users available for chatting will be in the user_available table.

    Problem: I need to get all users in user_available that do not have role_id 7. So I need to express in DQL something like (this is not even SQL, just words):

        SELECT * FROM user_available WHERE NOT user_available.User.Role.role_id = 7

    Really stuck on this one.

    EDIT: Guess I was unclear. The tables are already mapped and Doctrine does the INNER JOIN job for me. I'm using this code to get the admin that waited the longest:

        $admin = Doctrine_Query::create()
            ->select('c.id')
            ->from('Chat_available c')
            ->where('c.User.Roles.role_id = ?', 7)
            ->groupBy('c.id')
            ->orderBy('c.created_at ASC')
            ->fetchOne();

    Now I need to get the user that waited the longest, but this does NOT work:

        $admin = Doctrine_Query::create()
            ->select('c.id')
            ->from('Chat_available c')
            ->where('c.User.Roles.role_id != ?', 7)
            ->groupBy('c.id')
            ->orderBy('c.created_at ASC')
            ->fetchOne();
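
    The reason != misbehaves (an observation, not from the thread): the role filter applies per joined row, so a user holding role 7 plus any other role still matches through the other row. A sketch of the usual fix, excluding those users with a NOT IN subquery (Doctrine 1.x style; the component and column names are assumptions based on the schema described above):

        $user = Doctrine_Query::create()
            ->select('c.id')
            ->from('Chat_available c')
            // keep rows whose user never appears with role 7
            ->where('c.user_id NOT IN (SELECT r.user_id FROM User_roles r WHERE r.role_id = ?)', 7)
            ->orderBy('c.created_at ASC')
            ->fetchOne();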

    Read the article

  • Testing a patch to the Rails mysql adapter

    - by Sleepycat
    I wrote a little monkeypatch to the Rails MysqlAdapter and want to package it up to use in my other projects. I am trying to write some tests for it, but I am still new to testing and I am not sure how to test this. Can someone help get me started? Here is the code I want to test:

        unless RAILS_ENV == 'production'
          module ActiveRecord
            module ConnectionAdapters
              class MysqlAdapter < AbstractAdapter

                def select_with_explain(sql, name = nil)
                  explanation = execute_with_disable_logging('EXPLAIN ' + sql)
                  e = explanation.all_hashes.first
                  exp = e.collect { |k, v| " | #{k}: #{v} " }.join
                  log(exp, 'Explain')
                  select_without_explain(sql, name)
                end

                def execute_with_disable_logging(sql, name = nil) #:nodoc:
                  # Run a query without logging
                  @connection.query(sql)
                rescue ActiveRecord::StatementInvalid => exception
                  if exception.message.split(":").first =~ /Packets out of order/
                    raise ActiveRecord::StatementInvalid, "'Packets out of order' error was received from the database. Please update your mysql bindings (gem install mysql) and read http://dev.mysql.com/doc/mysql/en/password-hashing.html for more information. If you're on Windows, use the Instant Rails installer to get the updated mysql bindings."
                  else
                    raise
                  end
                end

                alias_method_chain :select, :explain
              end
            end
          end
        end

    Thanks.
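
    A starting point (a sketch, not a full test plan): smoke-test that the alias chain is wired up and that a trivial select still works against a configured test database.

        # test/explain_adapter_test.rb -- assumes a MySQL test database;
        # the method names come from the patch above
        require 'test_helper'

        class ExplainAdapterTest < ActiveSupport::TestCase
          def setup
            @conn = ActiveRecord::Base.connection
          end

          def test_alias_method_chain_is_wired_up
            assert @conn.respond_to?(:select_with_explain, true)
            assert @conn.respond_to?(:select_without_explain, true)
          end

          def test_select_still_returns_rows
            rows = @conn.select_all("SELECT 1 AS one")
            assert_equal 1, rows.first["one"].to_i
          end
        end

    From there, a mocking library could assert that log is called with 'Explain' on each select.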

    Read the article

  • Django: Serving Media Behind Custom URL

    - by TheLizardKing
    So I of course know that serving static files through Django will send you straight to hell, but I am confused about how to use a custom URL to mask the true location of a file. I asked about this before (http://stackoverflow.com/questions/2681338/django-serving-a-download-in-a-generic-view), but the answer I accepted seems to be the "wrong" way of doing things.

        # urls.py:
        url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),

        # views.py:
        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            fsock = open(os.path.join(song.path, song.filename))
            response = HttpResponse(fsock, mimetype='audio/mpeg')
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    This solution works perfectly, but not perfectly enough, it turns out. How can I avoid having a direct link to the mp3 while still serving through nginx/apache?

    EDIT 1 - ADDITIONAL INFO: Currently I can get my files by using an address such as:

        http://www.example.com/music/song/1692/download/

    But the above mentioned method is the devil's work. How can I accomplish what I get above while still making nginx/apache serve the media? Is this something that should be done at the webserver level? Some crazy mod_rewrite?

        http://static.example.com/music/Aphex%20Twin%20-%20Richard%20D.%20James%20(V0)/10%20Logon-Rock%20Witch.mp3
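
    The usual answer is to let Django authorize the download and hand the actual file transfer back to the webserver via X-Accel-Redirect (nginx) or X-Sendfile (Apache with mod_xsendfile). A sketch, with the /protected/ prefix and filesystem paths as assumptions:

        # views.py -- Django checks permissions; nginx streams the file
        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            response = HttpResponse(mimetype='audio/mpeg')
            response['Content-Disposition'] = 'attachment; filename="%s - %s.mp3"' % (song.artist, song.title)
            response['X-Accel-Redirect'] = '/protected/%s' % song.filename
            return response

        # nginx.conf -- "internal" makes the location unreachable by direct URL:
        #   location /protected/ {
        #       internal;
        #       alias /srv/media/music/;
        #   }

    Because the location is marked internal, the real path never works as a public link; only responses that went through the Django view can trigger it.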

    Read the article

  • How do I represent concurrent actions in jBPM, any of which can end a process?

    - by tpdi
    An example: a permit must be examined by two lawyers and one engineer. If any of those three reject it, the process enters a "rejected" end state. If all three grant the permit, it enters a "granted" end state. All three examiners may examine simultaneously, or in any order. Once one engineer has granted it, it shouldn't be available to be examined by an engineer; once two lawyers have examined it, it shouldn't be available to lawyers; once one engineer and two lawyers have examined it, it should go to the granted end state.

    My initial thinking is that either I have an overly complicated state transition diagram, with "the same" intermediate states multiply repeated, or I carry (external) state with the process:

        { bool rejected; int engineerSignoffId; int lawyer1SignoffId; int lawyer2SignoffId; }

    Or something like this? If so, how does the engineer's rejection terminate the subprocess that is in "Lawyers"?

        START -> FORK -> Engineer -> Granted? ------------------> Y -> JOIN --> Granted
                   |
                   |---> Lawyers --> Granted? -> by 2 lawyers? -> Y --------^
                            ^                          |
                            |------------ N -----------|

    What's the canonical jBPM answer to this? Can you point me to examples or documentation of such answers? Thanks.
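
    A rough jPDL-style skeleton of the fork/join portion (element names from jBPM 3; whether reaching an end-state cancels the sibling tokens depends on the jBPM version, e.g. jBPM 3.2's end-complete-process, so treat this as a starting point rather than a tested definition):

        <process-definition name="permit">
          <start-state>
            <transition to="examine"/>
          </start-state>

          <fork name="examine">
            <transition name="engineer" to="engineer-review"/>
            <transition name="lawyers"  to="lawyer-review"/>
          </fork>

          <task-node name="engineer-review">
            <transition name="granted"  to="approvals"/>
            <transition name="rejected" to="rejected"/>
          </task-node>

          <!-- lawyer-review would loop until two lawyer sign-offs are recorded
               in process variables, then transition to "approvals" -->

          <join name="approvals">
            <transition to="granted"/>
          </join>

          <end-state name="granted"/>
          <end-state name="rejected" end-complete-process="true"/>
        </process-definition>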

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of attributes A, I need to find all people that have a set of attributes that are a superset of A. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases:

        select distinct p from Person p join p.personAttributes a where a.value IN (:values)

    Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure that.

    I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible, the Attribute table actually has a field for default value:

        id | attr_name | default_value
        1  | Sex       | 1
        2  | Eye Color | 5

    If the value you're searching on happens to be the default value, I want it to return any people that have no explicit value set for that attribute, because in the application logic that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
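
    The IN-list alone returns people matching any of the values; the standard trick for "all of them" is to group by person and require the match count to equal the size of the value set. A JPQL sketch under that assumption (entity and relation names assumed from the tables above; pass the collection as :values and its size as :n):

        select a.person.id
        from PersonAttributes a
        where a.value.id in :values
        group by a.person.id
        having count(distinct a.value.id) = :n

    Since each person has at most one row per attribute, matching n distinct values means the person's attribute set is a superset of the filter.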

    Read the article

  • LINQ query code for complex merging of data.

    - by Stacey
    I've posted this before, but I worded it poorly. I'm trying again with a more well-thought-out structure. I have the following code and I am trying to figure out the shorter LINQ expression to do it 'inline'. Please examine the Run() method near the bottom. I am attempting to understand how to join two dictionaries together based on a matching identifier in one of the objects, so that I can use the query in this sort of syntax:

        var selected = from a in items.List() // etc. etc.
                       select a;

    This is my class structure. The Run() method is what I am trying to simplify. I basically need to do this conversion inline in a couple of places, and I wanted to simplify it a great deal so that I can define it more 'cleanly'.

        class TModel
        {
            public Guid Id { get; set; }
        }

        class TModels : List<TModel> { }

        class TValue { }

        class TStorage
        {
            public Dictionary<Guid, TValue> Items { get; set; }
        }

        class TArranged
        {
            public Dictionary<TModel, TValue> Items { get; set; }
        }

        static class Repository
        {
            static public TItem Single<TItem, TCollection>(Predicate<TItem> expression)
            {
                return default(TItem); // access logic.
            }
        }

        class Sample
        {
            public void Run()
            {
                TStorage tStorage = new TStorage();
                // access tStorage logic here.
                Dictionary<TModel, TValue> d = new Dictionary<TModel, TValue>();
                foreach (KeyValuePair<Guid, TValue> kv in tStorage.Items)
                {
                    d.Add(Repository.Single<TModel, TModels>(m => m.Id == kv.Key), kv.Value);
                }
            }
        }
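
    For what it's worth, the loop collapses to a single ToDictionary call (a sketch; it assumes Repository.Single never returns null for a stored key, since a null dictionary key would throw):

        // key: the TModel looked up by id; value: carried over unchanged
        var d = tStorage.Items.ToDictionary(
            kv => Repository.Single<TModel, TModels>(m => m.Id == kv.Key),
            kv => kv.Value);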

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy or at least elegant way to prevent duplicate entries in polymorphic has_many :through associations? I've got two models, stories and links, that can be tagged. I'm making a conscious decision not to use a plugin here. I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at: if I run the following in the console (assuming the story and tag objects exist in the database already)

        s = Story.find_by_id(1)
        t = Tag.find_by_id(1)
        s.tags << t
        s.tags << t

    my taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model:

        before_validation :validate_uniqueness

        def validate_uniqueness
          taggings = Tagging.find(:all, :conditions => { :tag_id => self.tag_id,
                                                         :taggable_id => self.taggable_id,
                                                         :taggable_type => self.taggable_type })
          if !taggings.empty?
            return false
          end
          return true
        end

    And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! (rather than save sans !) method, which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I can surround any attempts to add new tags with a try/catch, but that goes against the idea that you shouldn't expect your exceptions, and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is just silently not save the object to the database because a duplicate exists?
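
    Two small changes together give the "silently skip" behavior (a sketch using stock Rails validations, not from the thread): let a scoped uniqueness validation guard the table, and guard the << call so the failing save! is never triggered for a known duplicate.

        class Tagging < ActiveRecord::Base
          belongs_to :tag
          belongs_to :taggable, :polymorphic => true

          # one row per (tag, taggable) pair
          validates_uniqueness_of :tag_id, :scope => [:taggable_id, :taggable_type]
        end

        # when tagging, skip duplicates instead of letting save! raise:
        s.tags << t unless s.tags.include?(t)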

    Read the article

  • Linq Getting Customers group by date and then by their type

    - by Nitin varpe
    I am working on generating a report using LINQ in C# that shows the number of customers of each type. There are 3 types of customer: guest, registered and manager. I want to group customers by registered date and then by type of customer. I.e., if today 3 guests, 4 registered and 2 managers are inserted, and tomorrow 4, 5 and 6 are registered respectively, then the report should show the number of customers registered on each day, with a separate row for each type:

        DATE        TYPE OF CUSTOMER   COUNT
        31-10-2013  GUEST              3
        31-10-2013  REGISTERED         4
        31-10-2013  MANAGER            2
        30-10-2013  GUEST              5
        30-10-2013  REGISTERED         10
        30-10-2013  MANAGER            3

    Here is what I have so far:

        var subquery = from eat in _customerRepo.Table
                       group eat by new
                       {
                           yy = eat.CreatedOnUTC.Value.Year,
                           mm = eat.CreatedOnUTC.Value.Month,
                           dd = eat.CreatedOnUTC.Value.Day
                       } into g
                       select new { Id = g.Min(x => x.Id) };

        var query = from c in _customerRepo.Table
                    join cin in subquery.Distinct() on c.Id equals cin.Id
                    select c;

    With the above query I only get the first customer registered on each day. Thanks in advance.
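
    A sketch of grouping by the date parts and the customer type in one pass (a property like CustomerType is an assumption, standing in for whatever marks a customer as guest/registered/manager):

        var report = from c in _customerRepo.Table
                     group c by new
                     {
                         yy = c.CreatedOnUTC.Value.Year,
                         mm = c.CreatedOnUTC.Value.Month,
                         dd = c.CreatedOnUTC.Value.Day,
                         type = c.CustomerType
                     } into g
                     orderby g.Key.yy descending, g.Key.mm descending, g.Key.dd descending
                     select new
                     {
                         Year = g.Key.yy,
                         Month = g.Key.mm,
                         Day = g.Key.dd,
                         Type = g.Key.type,
                         Count = g.Count()   // one row per (day, type)
                     };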

    Read the article

  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database; it contains tables for Movies, People and Credits. The Movie table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this:

        CREATE TABLE [dbo].[Credits] (
            [Id]       [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY,
            [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id),
            [MovieId]  [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id),
            [Role]     [char] (1) NULL
        )

    In this simple example, the [Role] column is a single character, by my convention either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name, plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to json, it might look like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "roles": ["actor", "director"] },
            { "title": "Sands of Iwo Jima", "roles": ["director"] },
            { "title": "Dirty Harry",       "roles": ["actor"] },
            ...
          ]
        }

    How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. If I use this query:

        int personId = 10007;
        var persons = from p in db.People
                      where p.Id == personId
                      select new
                      {
                          name = p.Name,
                          movies = (from m in db.Movies
                                    join c in db.Credits on m.Id equals c.MovieId
                                    where (c.PersonId == personId)
                                    select new
                                    {
                                        title = m.Title,
                                        role = (c.Role == "D" ? "director" : "actor")
                                    })
                      };

    I get something like this:

        {
          "name" : "Clint Eastwood",
          "movies" : [
            { "title": "Unforgiven",        "role": "actor" },
            { "title": "Unforgiven",        "role": "director" },
            { "title": "Sands of Iwo Jima", "role": "director" },
            { "title": "Dirty Harry",       "role": "actor" },
            ...
          ]
        }

    ...but as you can see there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?
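
    One way (a sketch under the same schema): group the credits by movie title so each movie appears once, with its roles collected from the group.

        var persons = from p in db.People
                      where p.Id == personId
                      select new
                      {
                          name = p.Name,
                          movies = from m in db.Movies
                                   join c in db.Credits on m.Id equals c.MovieId
                                   where c.PersonId == personId
                                   group c by m.Title into g
                                   select new
                                   {
                                       title = g.Key,
                                       // one entry per credit row in the group
                                       roles = g.Select(c => c.Role == "D" ? "director" : "actor")
                                   }
                      };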

    Read the article

  • TSQL to insert an ascending value

    - by David Neale
    I am running some SQL that identifies records which need to be marked for deletion and inserts a value into those records. This value must be changed to render the record useless, and each record must be changed to a unique value because of a database constraint.

        UPDATE Users
        SET Username = 'Deleted' +
            (ISNULL(Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT), 0) + 1
        FROM Users a
        LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
        WHERE (b.sAMAccountName is NULL AND a.Username LIKE 'AVSOMPOL%')
           OR b.userAccountControl = 514

    This is the important bit:

        SET Username = 'Deleted' +
            (ISNULL(Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT), 0) + 1

    What I've tried to do is have deleted records' Username field set to 'Deletedxxx'. The ISNULL is needed because there may be no records matching the SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%' statement, and that will return NULL. I get a syntax error when trying to parse this:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'SELECT'.
        Msg 102, Level 15, State 1, Line 2
        Incorrect syntax near ')'.

    I'm sure there must be a better way to go about this, any ideas?
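
    The immediate syntax problem is that a scalar subquery needs its own parentheses, i.e. CAST((SELECT ...) AS INT). But even once fixed, MAX would hand every row in this one UPDATE the same suffix. A sketch of a different approach using ROW_NUMBER() so each row gets a unique value (SQL Server 2005+ assumed):

        WITH doomed AS (
            SELECT a.Username,
                   ROW_NUMBER() OVER (ORDER BY a.Username) AS rn
            FROM Users a
            LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
            WHERE (b.sAMAccountName IS NULL AND a.Username LIKE 'AVSOMPOL%')
               OR b.userAccountControl = 514
        )
        UPDATE doomed
        SET Username = 'Deleted' + CAST(rn AS VARCHAR(10));

    If 'Deleted...' names may already exist from earlier runs, add their count (or maximum suffix) to rn as an offset.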

    Read the article

  • How can I extend this SQL query to find the k nearest neighbors?

    - by Smigs
    I have a database full of two-dimensional data -- points on a map. Each record has a field of the geometry type. What I need to be able to do is pass a point to a stored procedure which returns the k nearest points (k would also be passed to the sproc, but that's easy). I've found a query at http://blogs.msdn.com/isaac/archive/2008/10/23/nearest-neighbors.aspx which gets the single nearest neighbour, but I can't figure out how to extend it to find the k nearest neighbours. This is the current query -- T is the table, g is the geometry field, @x is the point to search around, Numbers is a table with integers 1 to n:

        DECLARE @start FLOAT = 1000;

        WITH NearestPoints AS (
            SELECT TOP(1) WITH TIES *, T.g.STDistance(@x) AS dist
            FROM Numbers
            JOIN T WITH(INDEX(spatial_index))
              ON T.g.STDistance(@x) < @start * POWER(2, Numbers.n)
            ORDER BY n
        )
        SELECT TOP(1) *
        FROM NearestPoints
        ORDER BY n, dist

    The inner query selects the nearest non-empty region and the outer query then selects the top result from that region; the outer query can easily be changed to (e.g.) SELECT TOP(20), but if the nearest region only contains one result, you're stuck with that. I figure I probably need to recursively search for the first region containing k records, but without using a table variable (which would cause maintenance problems, as you have to create the table structure and it's liable to change -- there're lots of fields), I can't see how.
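
    A sketch of one way around the "first region might be too small" problem without a table variable: enumerate candidates per region, pick the first region holding at least @k rows, then take the @k closest from it (same @start/Numbers conventions as above):

        DECLARE @k INT = 20;
        DECLARE @start FLOAT = 1000;

        WITH Candidates AS (
            SELECT T.*, T.g.STDistance(@x) AS dist, Numbers.n
            FROM Numbers
            JOIN T WITH(INDEX(spatial_index))
              ON T.g.STDistance(@x) < @start * POWER(2, Numbers.n)
        ),
        FirstBigEnough AS (
            SELECT TOP(1) n
            FROM Candidates
            GROUP BY n
            HAVING COUNT(*) >= @k
            ORDER BY n
        )
        SELECT TOP(@k) c.*
        FROM Candidates c
        JOIN FirstBigEnough f ON c.n = f.n
        ORDER BY c.dist;

    Rows matching several regions are enumerated once per region, so this trades some duplicate work for not having to guess a region size up front.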

    Read the article

  • How do I find the module dependencies of my Perl script?

    - by zoul
    I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) dump a list of all the missing modules? Perl prints out the missing modules' names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I'd like to do something like:

        $ cpan -i `said-script --list-deps`

    Or even:

        $ list-deps said-script > required-modules # on my machine
        $ cpan -i `cat required-modules`           # on his machine

    Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer's life easier. (The required modules are sprinkled across several files, so that it's not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.)

    Update: Thanks, Manni, that will do. I did not know about %INC, I only knew about @INC. I settled with something like this:

        print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC);

    Which prints out:

        Moose::Meta::TypeConstraint::Registry
        Moose::Meta::Role::Application::ToClass
        Class::C3
        List::Util
        Imager::Color
        ...

    Looks like this will work.
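
    For completeness (not from the thread): Module::ScanDeps takes the same idea further and ships a scandeps.pl tool that reports a script's dependencies via static analysis, so the script doesn't have to run to completion first:

        # assuming Module::ScanDeps is installed from CPAN
        $ scandeps.pl said-script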

    Read the article

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi folks, I have two tables in my database called "points" and "category". A user will input info into both a location input and a keyword input text box. Then I want to find points in my table where the keyword matches either the "title" field in the points table or the "category", but are within a certain distance from the user's location. I want to order the results by distance. Here are the two queries, which both work independently:

        $mysql = "SELECT *,
                  ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) )
                    * cos( radians( longi ) - radians('$search_lng') )
                    + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance
                  FROM points
                  HAVING distance < '$radius'";

        $mysql2 = "SELECT * FROM `points`
                   LEFT JOIN category USING ( category_id )
                   WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')";

    Here is what I tried:

        $sql_search = sprintf("SELECT *, point_id FROM points
                               WHERE point_title LIKE '%%%s%%'
                               UNION
                               SELECT *,
                               ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) )
                                 * cos( radians( longi ) - radians('%s') )
                                 + sin( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance
                               FROM points
                               HAVING distance < '%s'
                               ORDER BY distance
                               LIMIT %d , %d",
                              $esc_catsearch,
                              mysql_real_escape_string($search_lat),
                              mysql_real_escape_string($search_lng),
                              mysql_real_escape_string($search_lat),
                              mysql_real_escape_string($radius),
                              $offset, $rowsPerPage);

    But it tells me there is no known column "distance". If I remove the ORDER BY phrase then it works, but I'm still not sure it's giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
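
    A UNION returns rows matching either condition, which isn't the goal here; both filters belong in one query. A sketch combining them (same placeholder style as the question; the keyword goes in WHERE, the computed alias in HAVING, which MySQL allows even without GROUP BY):

        SELECT points.*,
               ( 3959 * ACOS( COS(RADIANS('$search_lat')) * COS(RADIANS(lat))
                 * COS(RADIANS(longi) - RADIANS('$search_lng'))
                 + SIN(RADIANS('$search_lat')) * SIN(RADIANS(lat)) ) ) AS distance
        FROM points
        LEFT JOIN category USING (category_id)
        WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')
        HAVING distance < '$radius'
        ORDER BY distance
        LIMIT $offset, $rowsPerPage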

    Read the article

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) Orders table, 2) Vendor table and 3) Master Data table. Here's what the 3 tables look like:

        Table 1: BIZ_DOC2 (Orders table)
          OBJECTID         (unique key)
          UNIQUE_DOC_NAME  (document name, i.e. ORD-005)
          CREATED_AT       (date the order was created)

        Table 2: UDEF_VENDOR (Vendors table)
          PARENT_OBJECT_ID    (matches up to the OBJECTID in the Orders table)
          VENDOR_OBJECT_NAME  (name of the vendor, i.e. Acme)

        Table 3: BIZ_UNIT (Master Data table)
          PARENT_OBJECT_ID      (matches up to the OBJECTID in the Orders table)
          BIZ_UNIT_OBJECT_NAME  (name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and Master Data table do not have a link between them except through the Orders table. I can join all of the data from the tables, and it looks something like this (before selecting the latest order date):

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10
        ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like, with the columns UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries, but I just can't seem to get it working. Any help would be hugely appreciated!
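
    An analytic function handles "latest order per vendor, keeping all its rows" more cleanly than MAX sub-queries. A sketch, with the joins assumed from the description above:

        SELECT unique_doc_name, biz_unit_object_name, vendor_object_name, created_at
        FROM (
            SELECT d.unique_doc_name,
                   bu.biz_unit_object_name,
                   v.vendor_object_name,
                   d.created_at,
                   -- rank each vendor's orders newest-first; rows of the same
                   -- order share the same created_at, so they all get rank 1
                   RANK() OVER (PARTITION BY v.vendor_object_name
                                ORDER BY d.created_at DESC) AS rnk
            FROM biz_doc2 d
            JOIN udef_vendor v ON v.parent_object_id = d.objectid
            JOIN biz_unit   bu ON bu.parent_object_id = d.objectid
        )
        WHERE rnk = 1;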

    Read the article

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my NHibernate mapping and I can't find a solution by searching on Stack Overflow, Google or the documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    Units are assigned to a damage, and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The "ends" field of the old record and the "starts" field of the new record are set to the current timestamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select *
        from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database; I made it up to make clear what I mean. The real database is a little more complex.)

    Now I want to map it so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), and each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to NHibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias
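
    NHibernate's filter mechanism is built for exactly this kind of parameterized, session-wide condition. A sketch (hbm.xml fragments; column names taken from the tables above, everything else illustrative):

        <filter-def name="validAt">
          <filter-param name="asOf" type="DateTime" />
        </filter-def>

        <class name="Damage" table="damage">
          <!-- id and property mappings omitted -->
          <bag name="Units">
            <key column="damage_enduring_id" property-ref="EnduringId" />
            <one-to-many class="Unit" />
            <filter name="validAt" condition="starts &lt;= :asOf and ends &gt;= :asOf" />
          </bag>
          <filter name="validAt" condition="starts &lt;= :asOf and ends &gt;= :asOf" />
        </class>

    and at query time:

        session.EnableFilter("validAt").SetParameter("asOf", wantedTime);

    With the filter enabled, both the damages loaded and the units in each Bag are restricted to the rows valid at that instant.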

    Read the article

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at random on an each-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects.

    I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time, but that is very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e.:

        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place. I've got no control of the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase
        INSERT tmp_RandomizedData
        SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL
        SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN'
        -- etc, another 98 times...
        -- FYI, this is not real data!

    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
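
    Since the rows are already in memory in the C# application, SqlBulkCopy is the usual answer; it streams the data in one bulk operation instead of thousands of INSERT statements. A sketch, assuming the 975k rows have been copied into a DataTable whose columns match tmp_RandomizedData:

        // using System.Data; using System.Data.SqlClient;
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "tmp_RandomizedData";
            bulk.BatchSize = 10000;          // commit in chunks
            bulk.BulkCopyTimeout = 0;        // no timeout for the full load
            bulk.WriteToServer(dataTable);   // dataTable holds the random rows
        }

    Loading into a heap with no indexes, as described, is exactly the right setup for this; adding the indexes afterwards remains the faster order.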

    Read the article

  • Rails paginate array items one-by-one instead of page-by-page

    - by hnovick
    Hi guys, I have a group of assets; let's call them "practitioners". I'm displaying these practitioners in the header of a calendar interface. There are 7 columns to the calendar; 7 columns = 7 practitioners per view/page. Right now, if the first page shows you practitioners 1-7, when you go to the next page you will see practitioners 8-14, the next page 15-21, etc. I am wondering how to page the practitioners so that if the first page shows you practitioners 1-7, the next page will show you practitioners 2-8, then 3-9, etc. I would greatly appreciate any help you can offer. Here is the Rails code I am working with. Best regards, Harris Novick

        # get the default sort order
        sort_order = RESOURCE_SORT_ORDER

        # if we've been given asset ids, start our list with them
        unless params[:asset_ids].blank?
          params[:asset_ids] = params[:asset_ids].values unless params[:asset_ids].is_a?(Array)
          sort_order = "#{params[:asset_ids].collect{|id| "service_provider_resources.id = #{id} DESC"}.join(",")}, #{sort_order}"
        end

        @asset_set = @provider.active_resources(:include => {:active_services => :latest_approved_version}).paginate(
          :per_page => RESOURCES_IN_DAY_VIEW,
          :page => params[:page],
          :order => sort_order
        )
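
    will_paginate's page-based math can't produce an overlapping window, but the same effect falls out of a plain limit/offset query where the offset advances by 1 instead of by the page size. A sketch (the :start parameter name is illustrative):

        start = params[:start].to_i   # 0, 1, 2, ... advances one practitioner at a time

        @asset_set = @provider.active_resources.find(
          :all,
          :include => { :active_services => :latest_approved_version },
          :order   => sort_order,
          :limit   => RESOURCES_IN_DAY_VIEW,
          :offset  => start
        )

        # "next" link: start + 1; "previous" link: start - 1 (floored at 0)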

    Read the article

  • Mysql Server Optimization

    - by Ish Kumar
    Hi geeks, we are having serious MySQL (InnoDB) performance issues at a moment when we do:

      - (10-20) insertions on TABLE1
      - (10-20) updates on TABLE2

    Note: both of the above operations happen within a fraction of a second, and this occurs every few (10-15) minutes. Meanwhile, all online users (approx 400-600) run a read operation on the join of TABLE1 & TABLE2 every 1 second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44

    Issues: lots of queries wait and later expire (seen via phpMyAdmin / processes), and my poor MySQL server sometimes crashes.

    Questions: Q1: Any suggestions to optimize at the MySQL level? Q2: I am thinking of using persistent connections at the application level -- is that right?

    Info added later: the engine is InnoDB; TABLE1 has 400,000 rows (inserting 8,000 daily) and TABLE2 has 8,000 rows. The 1-second query:

        SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price,
               u.username, u.email, u.mobile
        FROM TABLE1 b, TABLE2 u
        WHERE b.credit = 0
          AND b.user_id = u.id
          AND b.auction_id = "12345"
        ORDER BY b.id DESC
        LIMIT 10;

        -- there are a few more queries, but they are not so critical.

    Indexing is good; we are using indexes wisely. In the above query all ids are indexed. TABLE1 has frequent insertions and TABLE2 has frequent updates.
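
    One concrete thing to check (a suggestion, not from the thread): separate single-column indexes on id/user_id/auction_id don't serve this query's WHERE plus ORDER BY together. A composite index matching the filter columns and the sort lets InnoDB read the 10 newest matching rows directly:

        -- filter on (auction_id, credit), then already sorted by id
        ALTER TABLE TABLE1 ADD INDEX idx_auction_credit_id (auction_id, credit, id);

    With that in place, EXPLAIN should stop showing a filesort for the 1-second query.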

    Read the article

  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a zip file from a folder (and its subfolders). It works fine and creates a new .zip file, but I am having an issue when using glob.glob. It reads all files from the desired (source) folder and writes them to the new zip file; the problem is that it adds the subdirectories but not the files inside those subdirectories. I am giving the user an option to select the filename and path, as well as the file type (zip or tar). I don't get any problem while creating a .tar.gz file, but this problem comes up when the user creates a .zip file. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            for filename in filenames:
                myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    then this second version takes all files, even those inside folders/subfolders, creates a new .zip file and writes them to it without any directory structure. It does not even keep the directory structure for the main folder; it simply writes all files from the main dir or subdirs flat into the .zip file. Can anyone please help me or offer a suggestion? I would prefer glob.glob rather than the second option. Thanks in advance. Regards, Akash
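
    The underlying issue: glob.glob("dir/*") matches only one directory level and never descends into subdirectories, so the zip branch silently skips nested files. A sketch of the os.walk version with archive names made relative to the source dir, which preserves the folder structure inside the zip:

        import os
        import zipfile

        def zip_tree(source_dir, zip_path):
            """Zip source_dir recursively, keeping paths relative to it."""
            my_zip = zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED)
            for dirpath, dirnames, filenames in os.walk(source_dir):
                for filename in filenames:
                    full = os.path.join(dirpath, filename)
                    # store "sub/dir/file.txt" instead of just "file.txt"
                    arcname = os.path.relpath(full, source_dir)
                    my_zip.write(full, arcname)
            my_zip.close()

    Passing os.path.basename(...) as the archive name is what flattened the structure; os.path.relpath keeps it.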

    Read the article

  • Query Access and VB.NET

    - by yae
    Hi all: I have 2 tables, "products" and "pieces":

        PRODUCTS
          idProd
          product
          price

        PIECES
          id
          idProdMain
          idProdChild
          quant

    idProdMain and idProdChild are linked with the "products" table. Other considerations: 1 product can have several pieces, and 1 product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. Example:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT DO I NEED? I need to update the price of the main product when the price of a child product (piece) is changed. Following the previous example, if I change the price of the product "Memory" (which is a piece too) to 60€, then the product "Computer" must change its price to 320€. How can I do it using queries? I have already tried this to obtain the price of the main product, but it doesn't run; the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products
        LEFT JOIN pieces ON (products.idProd = pieces.idProdChild)
                        AND (products.idProd = pieces.idProdChild)
                        AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));
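
    The join condition is the problem: no row can have products.idProd equal to both idProdChild and idProdMain at once, so the ON clause never matches. A sketch joining the pieces to their child products only, then filtering by the main product (idProdMain = 1 for the Computer example; the WHERE value above, 5, has no pieces at all):

        SELECT Sum(products.price * pieces.quant) AS Expr1
        FROM products
        INNER JOIN pieces ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1;

    With the example data (Memory at 60€) this returns 100*1 + 60*2 + 100*1 = 320, which could then drive an UPDATE of the main product's price.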

    Read the article

  • DBD::CSV: Problem with file-name-extensions

    - by sid_com
    In this script I have problems with file-name extensions: if I use /home/mm/test_x it works, but with files named /home/mm/test_x.csv it doesn't:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use DBI;

        my $table_1 = '/home/mm/test_1.csv';
        my $table_2 = '/home/mm/test_2.csv';
        #$table_1 = '/home/mm/test_1';
        #$table_2 = '/home/mm/test_2';

        my $dbh = DBI->connect( "DBI:CSV:" );
        $dbh->{RaiseError} = 1;
        $table_1 = $dbh->quote_identifier( $table_1 );
        $table_2 = $dbh->quote_identifier( $table_2 );

        my $sth = $dbh->prepare( "SELECT a.id, a.name, b.city FROM $table_1 AS a NATURAL JOIN $table_2 AS b" );
        $sth->execute;
        $sth->dump_results;
        $dbh->disconnect;

    Output with file-name extension:

        DBD::CSV::st execute failed: Execution ERROR: No such column '"/home/mm/test_1.csv".id' called from /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux/DBD/File.pm at 570.

    Output without file-name extension:

        '1', 'Brown', 'Laramie'
        '2', 'Smith', 'Watertown'
        2 rows

    Is this a bug?
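
    A workaround sketch (DBD::CSV builds on DBD::File, whose f_dir and f_ext attributes control how table names map to files): point the handle at the directory, register the extension, and name tables without it:

        # f_ext '.csv/r' means: append .csv to table names ('r' = required)
        my $dbh = DBI->connect( "dbi:CSV:f_dir=/home/mm;f_ext=.csv/r",
                                undef, undef, { RaiseError => 1 } );

        my $sth = $dbh->prepare(
            "SELECT a.id, a.name, b.city FROM test_1 AS a NATURAL JOIN test_2 AS b" );

    Quoting the full path with the extension is what confuses the column resolution seen in the error above; f_ext keeps the table identifiers plain.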

    Read the article

  • PHP: How To Integrate HTML Purifier To Filter User Submitted Data?

    - by TaG
    I have this script that collects data from users, and I want to check their data for malicious code like XSS and SQL injections by using HTML Purifier (http://htmlpurifier.org/), but how do I add it to my PHP form submission script? Here is my HTML Purifier code:

        require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';

        $config = HTMLPurifier_Config::createDefault();
        $config->set('Core.Encoding', 'UTF-8'); // replace with your encoding
        $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); // replace with your doctype
        $purifier = new HTMLPurifier($config);
        $clean_html = $purifier->purify($dirty_html);

    Here is my PHP form submission code:

        if (isset($_POST['submitted'])) { // Handle the form.
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.*, profile.* FROM users
                                          INNER JOIN contact_info ON contact_info.user_id = users.user_id
                                          WHERE users.user_id=3");

            $about_me = mysqli_real_escape_string($mysqli, $_POST['about_me']);
            $interests = mysqli_real_escape_string($mysqli, $_POST['interests']);

            if (mysqli_num_rows($dbc) == 0) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "INSERT INTO profile (user_id, about_me, interests)
                                              VALUES ('$user_id', '$about_me', '$interests')");
            }

            if ($dbc == TRUE) {
                $dbc = mysqli_query($mysqli, "UPDATE profile SET about_me = '$about_me', interests = '$interests'
                                              WHERE user_id = '$user_id'");
                echo '<p class="changes-saved">Your changes have been saved!</p>';
            }

            if (!$dbc) {
                // There was an error...do something about it here...
                print mysqli_error($mysqli);
                return;
            }
        }
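
    A sketch of wiring the two together: purify the raw POST values first (that handles the XSS side), then escape the purified strings for the query as before (that, or better yet prepared statements, handles the SQL side):

        require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';

        $config = HTMLPurifier_Config::createDefault();
        $config->set('Core.Encoding', 'UTF-8');
        $purifier = new HTMLPurifier($config);

        // purify first, then escape for SQL
        $about_me  = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['about_me']));
        $interests = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['interests']));

    Note that HTML Purifier only sanitizes HTML; it is not an SQL-injection defense, so the escaping (or a switch to mysqli prepared statements) still has to stay.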

    Read the article
