Search Results

Search found 21972 results on 879 pages for 'on duplicate key'.


  • Using string[] as a Dictionary key e.g. Dictionary<string[], StringBuilder>

    - by Nick Allen - Tungle139
    The structure I am trying to achieve is a composite Dictionary key which is item name and item displayname and the Dictionary value being the combination of n strings So I came up with var pages = new Dictionary<string[], StringBuilder>() { { new string[] { "food-and-drink", "Food & Drink" }, new StringBuilder() }, { new string[] { "activities-and-entertainment", "Activities & Entertainment" }, new StringBuilder() } }; foreach (var obj in my collection) { switch (obj.Page) { case "Food": case "Drink": pages["KEY"].Append("obj.PageValue"); break; ... } } The part I am having trouble with is accessing the Dictionary Key pages["KEY"] How do I target the Dictionary Key whose value at [0] == some value? Hope that makes sense

    Read the article

  • Unable to create index because of duplicate that doesn't exist?

    - by Alex Angas
    I'm getting an error running the following Transact-SQL command: CREATE UNIQUE NONCLUSTERED INDEX IX_TopicShortName ON DimMeasureTopic(TopicShortName) The error is: Msg 1505, Level 16, State 1, Line 1 The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.DimMeasureTopic' and the index name 'IX_TopicShortName'. The duplicate key value is (). When I run SELECT * FROM sys.indexes WHERE name = 'IX_TopicShortName' or SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[DimMeasureTopic]') the IX_TopicShortName index does not display. So there doesn't appear to be a duplicate. I have the same schema in another database and can create the index without issues there. Any ideas why it won't create here?
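    Since the error reports the duplicate key value as (), the likely explanation is that more than one row already has an empty (or NULL) TopicShortName, even though no index with that name exists yet. A quick check along these lines (just a sketch against the table named in the question) should surface the offending rows before the unique index is attempted again:

        -- rows whose TopicShortName value appears more than once
        SELECT TopicShortName, COUNT(*) AS row_count
        FROM dbo.DimMeasureTopic
        GROUP BY TopicShortName
        HAVING COUNT(*) > 1;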

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on the tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes, and the queries use these non-primary-key columns in their WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables which had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side-effects of having tables that don't have a primary key.

    Read the article

  • DbMetal chokes on repeated foreign key references in SQLite - any ideas?

    - by DanM
    I've been struggling to get DbMetal to process my SQLite database. I finally isolated the problem. It won't allow a table to have two foreign key references to the same column. For example, a SQLite database with these two tables will fail: CREATE TABLE Person ( Id INTEGER PRIMARY KEY, Name TEXT NOT NULL ); CREATE TABLE Match ( Id INTEGER PRIMARY KEY, WinnerPersonId INTEGER NOT NULL REFERENCES Person(Id), LoserPersonId INTEGER NOT NULL REFERENCES Person(Id) ); I get this error: DbMetal: Sequence contains more than one matching element If I get rid of the second foreign key reference, no error occurs. So, this works: CREATE TABLE Match ( Id INTEGER PRIMARY KEY, WinnerPersonId INTEGER NOT NULL REFERENCES Person(Id), LoserPersonId INTEGER NOT NULL ); But I really need both "person" columns to reference the person table. I submitted a bug report for this, but I could use a workaround in the meantime. Any ideas?
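    Just an untested idea while waiting on the bug report: SQLite also accepts foreign keys as named, table-level constraints, and giving each reference its own distinct name is a cheap thing to try against DbMetal (the constraint names below are made up):

        -- same schema, but each FK is a distinct, named table-level constraint
        CREATE TABLE Match (
            Id INTEGER PRIMARY KEY,
            WinnerPersonId INTEGER NOT NULL,
            LoserPersonId INTEGER NOT NULL,
            CONSTRAINT FK_Match_Winner FOREIGN KEY (WinnerPersonId) REFERENCES Person(Id),
            CONSTRAINT FK_Match_Loser FOREIGN KEY (LoserPersonId) REFERENCES Person(Id)
        );

    If DbMetal still trips over this, the fallback is the workaround already described: drop the second REFERENCES clause and maintain that association by hand in the generated mapping or in application code.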

    Read the article

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first, change everything eventually). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I am trying to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file... CompanyEntity.java public class CompanyEntity { private Integer id; private Set<CompanyNameEntity> names; ... } CompanyNameEntity.java public class CompanyNameEntity implements Serializable { private Integer id; private String languageId; private String name; ... } CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update and existing record. I received a BatchUpdateException and after looking through the SQL logs I saw Hibernate was trying to do something stupid... update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=? Hibernate was trying to dis-associate child records before updating them. The problem is that this field is part of the PK and not-nullable. I found the quick solution to make Hibernate not do this is to add "not-null='true'" to the "key" element in the parent mapping. SO now may mapping looks like this... CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID" not-null="true"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This mapping gives the exception... org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false") My problem now is that I have tryed to add these attributes to the key-property element but that is not supported by the DTD. 
I have also tried changing it to a key-many-to-one element, but that didn't work either. So... How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    Read the article

  • How to use SQL - INSERT...ON DUPLICATE KEY UPDATE?

    - by Probocop
    Hi, I have a script which captures tweets and puts them into a database. I will be running the script on a cron job and then displaying the tweets on my site from the database, to avoid hitting the limit on the Twitter API. Since I don't want duplicate tweets in my database, I understand I can use 'INSERT...ON DUPLICATE KEY UPDATE' to achieve this, but I don't quite understand how to use it. My database structure is as follows:

        Table - hash
            id (auto_increment)
            tweet
            user
            user_url

    And currently my SQL to insert is as follows:

        $tweet = $clean_content[0];
        $user_url = $clean_uri[0];
        $user = $clean_name[0];
        $query='INSERT INTO hash (tweet, user, user_url) VALUES ("'.$tweet.'", "'.$user.'", "'.$user_url.'")';
        mysql_query($query);

    How would I correctly use 'INSERT...ON DUPLICATE KEY UPDATE' to insert only if the tweet doesn't exist, and update if it does? Thanks
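    A minimal sketch of the usual approach (assuming tweet is a VARCHAR column short enough to carry a unique index; the index name and sample values are illustrative): ON DUPLICATE KEY UPDATE only fires when an inserted row would violate a PRIMARY KEY or UNIQUE index, so the column that defines "duplicate" needs a unique index first.

        -- the UNIQUE index is what makes ON DUPLICATE KEY fire
        ALTER TABLE hash ADD UNIQUE KEY uniq_tweet (tweet);

        INSERT INTO hash (tweet, user, user_url)
        VALUES ('some tweet text', 'some user', 'http://example.com/user')
        ON DUPLICATE KEY UPDATE
            user     = VALUES(user),
            user_url = VALUES(user_url);

    If nothing actually needs updating when the tweet already exists, INSERT IGNORE is a simpler alternative that just skips the duplicate row.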

    Read the article

  • What is the representation of the mac command key in the terminal?

    - by freethinker
    Just as the Control key is represented by '^' in the terminal, what is the equivalent for the Command key on a Mac? I am trying to remap my bash shortcuts using stty, for example stty eof ^D, but instead of Control I want to use the Command key. EDIT: The issue I was trying to solve was that I wanted to interchange the Command and Control keys, because I work on OS X and Linux and the different key combinations cause me a lot of pain. So I interchanged the modifier keys using OS X preferences. But then all the bash shortcuts like Ctrl+C became equivalent to the key sequence Cmd+C, which is not acceptable. Thankfully iTerm2 supports remapping of modifier keys as well, so for iTerm2 I reversed them again, which means iTerm2 recognizes Command as Command and Control as Control. So the problem is solved for now.

    Read the article

  • Is it a good idea to use only a key to encrypt an entire (small) filesystem?

    - by Fernando Miguélez
    This question comes as part of my doubts presented in a broader question about ideas for implementing a small encrypted filesystem on Java mobile phones (J2ME, BlackBerry, Android). Given the little feedback received, and considering the density of that question, I decided to divide those doubts into smaller questions. So, to sum up, I plan to "create" an encrypted filesystem for mobile phones (with the help of BouncyCastle or a subset of JCE), providing an API that lets them be accessed in a transparent way. Encryption would be carried out on a per-file basis (not per-block). My question is this: Is it a good idea to use only a symmetric key (maybe AES-256) to encrypt all the files (there wouldn't be that many, maybe tens of them) and store this key in a keystore (protected by a PIN), or would you rather encrypt each file with an on-the-fly generated key stored alongside that file, encrypting that key with the "master" key stored in the keystore? What are the benefits/drawbacks of each approach?

    Read the article

  • Lua : How to check if one of the values associated with the specified key of a table is nil, from ap

    - by felace
    In Lua, it's legal to do this: table={} bar if(table[key]==nil) foo However, using the C API, I couldn't find a way to check whether there's a nil value at the specified position. lua_getglobal(L,"table"); lua_gettable(L,key); If there's a nil value stored in table[key], lua_gettable would give me the "unprotected error in call to Lua API (attempt to index a nil value)" message. Is there any way to check whether there's actually something associated with that key, before actually pushing the key to do so?

    Read the article

  • H12 timeout error on Heroku

    - by snowangel
    Can anyone shed some light on what's causing this timeout error on Heroku (at 2012-07-08T08:58:33+00:00)? The docs say that it's because of some long running process. I've set config.assets.initialize_on_precompile = false in config/application.rb. EmBP-2:bc Emma$ heroku restart Restarting processes... done EmBP-2:bc Emma$ heroku logs --tail 2012-07-08T08:47:21+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.1" 200 311723 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:47:21+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.0" 200 1311615 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:51:32+00:00 heroku[slugc]: Slug compilation started 2012-07-08T08:54:05+00:00 heroku[api]: Release v145 created by [email protected] 2012-07-08T08:54:05+00:00 heroku[api]: Deploy 8814b2f by [email protected] 2012-07-08T08:54:05+00:00 heroku[web.1]: State changed from up to starting 2012-07-08T08:54:06+00:00 heroku[slugc]: Slug compilation finished 2012-07-08T08:54:09+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2012-07-08T08:54:09+00:00 heroku[worker.1]: Stopping all processes with SIGTERM 2012-07-08T08:54:09+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 22429 -c ./config/unicorn.rb` 2012-07-08T08:54:10+00:00 app[worker.1]: [Worker(host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1)] Exiting... 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.320616 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376765 #1] INFO -- : master complete 2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376272 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011695 #1] INFO -- : worker=0 spawning... 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011386 #1] INFO -- : listening on addr=0.0.0.0:22429 fd=3 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.017917 #5] INFO -- : worker=0 spawned pid=5 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.019309 #1] INFO -- : master process ready 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.018250 #5] INFO -- : Refreshing Gem list 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.016768 #1] INFO -- : worker=1 spawning... 
2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020863 #8] INFO -- : Refreshing Gem list 2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020617 #8] INFO -- : worker=1 spawned pid=8 2012-07-08T08:54:12+00:00 app[worker.1]: SQL (2.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1') 2012-07-08T08:54:12+00:00 heroku[web.1]: Process exited with status 0 2012-07-08T08:54:13+00:00 heroku[web.1]: State changed from starting to up 2012-07-08T08:54:14+00:00 heroku[worker.1]: Process exited with status 0 2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from up to down 2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from down to starting 2012-07-08T08:54:20+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work` 2012-07-08T08:54:20+00:00 heroku[worker.1]: State changed from starting to up 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 
2012-07-08T08:54:34+00:00 app[web.1]: 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. 
This will clash with attachment defined in Profitarchive class 2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:54:47+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.467693 #8] INFO -- : worker=1 ready 2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.823800 #5] INFO -- : worker=0 ready 2012-07-08T08:54:48+00:00 app[worker.1]: Starting the New Relic Agent. 2012-07-08T08:54:48+00:00 app[worker.1]: New Relic Agent not running. 2012-07-08T08:54:48+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1 2012-07-08T08:54:48+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:54:49+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Starting job worker 2012-07-08T08:57:54+00:00 heroku[web.1]: State changed from up to starting 2012-07-08T08:57:56+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047386 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047753 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1 2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047999 #1] INFO -- : master complete 2012-07-08T08:57:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM 2012-07-08T08:57:58+00:00 heroku[web.1]: Process exited with status 0 2012-07-08T08:57:58+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Exiting... 2012-07-08T08:57:59+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 29766 -c ./config/unicorn.rb` 2012-07-08T08:58:01+00:00 app[worker.1]: SQL (27.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1') 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070527 #1] INFO -- : listening on addr=0.0.0.0:29766 fd=3 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070782 #1] INFO -- : worker=0 spawning... 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.074498 #1] INFO -- : worker=1 spawning... 
2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.075702 #1] INFO -- : master process ready 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076732 #5] INFO -- : worker=0 spawned pid=5 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076957 #5] INFO -- : Refreshing Gem list 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089022 #8] INFO -- : worker=1 spawned pid=8 2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089299 #8] INFO -- : Refreshing Gem list 2012-07-08T08:58:02+00:00 heroku[worker.1]: Process exited with status 0 2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from up to down 2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from down to starting 2012-07-08T08:58:02+00:00 heroku[web.1]: State changed from starting to up 2012-07-08T08:58:10+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work` 2012-07-08T08:58:11+00:00 heroku[worker.1]: State changed from starting to up 2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10) 2012-07-08T08:58:33+00:00 heroku[router]: Error H12 (Request timeout) -> GET codicology.co.uk/ dyno=web.1 queue= wait= service=30000ms status=503 bytes=0 2012-07-08T08:58:33+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.0" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:58:33+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.1" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk 2012-07-08T08:58:42+00:00 app[worker.1]: New Relic Agent not running. 2012-07-08T08:58:42+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1 2012-07-08T08:58:42+00:00 app[worker.1]: Starting the New Relic Agent. 2012-07-08T08:58:42+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:58:43+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] Starting job worker 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. 
(called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) 2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent. 2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. 2012-07-08T08:59:03+00:00 app[web.1]: 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. 
This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class 2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:59:24+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class 2012-07-08T08:59:25+00:00 app[web.1]: I, [2012-07-08T08:59:25.555052 #5] INFO -- : worker=0 ready 2012-07-08T08:59:25+00:00 app[web.1]: 2012-07-08T08:59:25+00:00 app[web.1]: 2012-07-08T08:59:25+00:00 app[web.1]: Started GET "/" for 82.69.50.215 at 2012-07-08 08:59:25 +0000 2012-07-08T08:59:26+00:00 app[web.1]: Processing by PagesController#home as HTML 2012-07-08T08:59:26+00:00 app[web.1]: I, [2012-07-08T08:59:26.043501 #8] INFO -- : worker=1 ready 2012-07-08T08:59:26+00:00 app[web.1]: Rendered pages/home.html.haml within layouts/application (5.7ms) 2012-07-08T08:59:26+00:00 app[web.1]: (1.1ms) SELECT COUNT(*) FROM "delayed_jobs" 2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_header.html.erb (4.2ms) 2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_footer.html.haml (1.4ms) 2012-07-08T08:59:26+00:00 app[web.1]: Completed 200 OK in 326ms (Views: 258.4ms | ActiveRecord: 65.2ms)

    Read the article

  • Public key of Android project and keystore created in Eclipse?

    - by user578056
    I created an Android project using Eclipse (under Windows FWIW) and let Eclipse create the keypair during the Export Android Application process. I successfully used Eclipse to make a signed release build that is now on the Market. Now I want to use ProGuard, which I believe means using Ant instead of Eclipse to build. It was a pain, but Ant building works in both debug and release, until it tries to sign the APK. I get:

        [signjar] jarsigner: Certificate chain not found for: redacted.
        redacted must reference a valid KeyStore key entry containing a private key and corresponding public key certificate chain.

    keytool -list -keystore redacted gives me:

        Keystore type: JKS
        Keystore provider: SUN
        Your keystore contains 1 entry
        redacted, Jan 16, 2011, PrivateKeyEntry,
        Certificate fingerprint (MD5): BD:0F:70:C1:39:F5:FE:5B:BC:CD:89:0B:C8:66:95:E0

    Which brings me to the actual question: where is my public key? I have some sort of public key on my Android Market profile, but is that the pair for my private key? If so, how do I store that in the keystore so that jarsigner will work?

    Read the article

  • DSA signature verification input

    - by calccrypto
    What is the data inputted into DSA when PGP signs a message? From RFC4880, i found A Signature packet describes a binding between some public key and some data. The most common signatures are a signature of a file or a block of text, and a signature that is a certification of a User ID. im not sure if it is the entire public key, just the public key packet, or some other derivative of a pgp key packet. whatever it is, i cannot get the DSA signature to verify here is a sample im testing my program on: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 abcd -----BEGIN PGP SIGNATURE----- Version: BCPG v1.39 iFkEARECABkFAk0z65ESHGFiYyAodGVzdCBrZXkpIDw+AAoJEC3Jkh8+bnkusO0A oKG+HPF2Qrsth2zS9pK+eSCBSypOAKDBgC2Z0vf2EgLiiNMk8Bxpq68NkQ== =gq0e -----END PGP SIGNATURE----- Dumped from pgpdump.net Old: Signature Packet(tag 2)(89 bytes) Ver 4 - new Sig type - Signature of a canonical text document(0x01). Pub alg - DSA Digital Signature Algorithm(pub 17) Hash alg - SHA1(hash 2) Hashed Sub: signature creation time(sub 2)(4 bytes) Time - Mon Jan 17 07:11:13 UTC 2011 Hashed Sub: signer's User ID(sub 28)(17 bytes) User ID - abc (test key) <> Sub: issuer key ID(sub 16)(8 bytes) Key ID - 0x2DC9921F3E6E792E Hash left 2 bytes - b0 ed DSA r(160 bits) - a1 be 1c f1 76 42 bb 2d 87 6c d2 f6 92 be 79 20 81 4b 2a 4e DSA s(160 bits) - c1 80 2d 99 d2 f7 f6 12 02 e2 88 d3 24 f0 1c 69 ab af 0d 91 -> hash(DSA q bits) and the public key for it is: -----BEGIN PGP PUBLIC KEY BLOCK----- Version: BCPG v1.39 mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzE kRc30PvmEHI1faQit7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPL Af9IZOv4PI51gg6ICLKzNO9i3bcUx4yeG2vjMOUAvsLkhSTWob0RxWppo6Pn6MOg dMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO7jj/qcGx6hWmsKUr BVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzRtBFhYmMgKHRlc3Qg a2V5KSA8PohGBBMRAgAGBQJNM+p5AAoJEC3Jkh8+bnkuNEoAnj2QnqGtdlTgUXCQ Fyvwk5wiLGPfAJ4jTGTL62nWzsgrCDIMIfEG2shm8bjMBE0z6ngQAgCUlP7AlfO4 XuKGVCs4NvyBpd0KA0m0wjndOHRNSIz44x24vLfTO0GrueWjPMqRRLHO8zLJS/BX O/BHo6ypjN87Af0VPV1hcq20MEW2iujh3hBwthNwBWhtKdPXOndJGZaB7lshLJuW v9z6WyDNXj/SBEiV1gnPm0ELeg8Syhy5pCjMAgCFEc+NkCzcUOJkVpgLpk+VLwrJ /Wi9q+yCihaJ4EEFt/7vzqmrooXWz2vMugD1C+llN6HkCHTnuMH07/E/2dzciEYE GBECAAYFAk0z6nkACgkQLcmSHz5ueS7NTwCdED1P9NhgR2LqwyS+AEyqlQ0d5joA oK9xPUzjg4FlB+1QTHoOhuokxxyN =CTgL -----END PGP PUBLIC KEY BLOCK----- the public key packet of the key is mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzEkRc30PvmEHI1faQi t7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPLAf9IZOv4PI51gg6ICLKzNO9i3bcUx4ye G2vjMOUAvsLkhSTWob0RxWppo6Pn6MOgdMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO 7jj/qcGx6hWmsKUrBVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzR in radix 64 i have tried many different combinations of sha1(< some data + 'abcd'),but the calculated value v never equals r, of the signature i know that the pgp implementation i used to create the key and signature is correct. i also know that my DSA implementation and PGP key data extraction program are correct. thus, the only thing left is the data to hash. what is the correct data to be hashed?

    Read the article

  • java and threads: very strange behaviour

    - by Derk
    private synchronized Map<Team, StandingRow> calculateStanding() { System.out.println("Calculate standing for group " + getName()); Map<Team, StandingRow> standing = new LinkedHashMap<Team, StandingRow>(); for (Team team : teams) { standing.put(team, new StandingRow(team)); } StandingRow homeTeamRow, awayTeamRow; for (Match match : matches.values()) { homeTeamRow = standing.get(match.getHomeTeam()); awayTeamRow = standing.get(match.getAwayTeam()); System.out.println("Contains key for " + match.getHomeTeam() + ": " + standing.containsKey(match.getHomeTeam())); System.out.println("Contains key for " + match.getAwayTeam() + ": " + standing.containsKey(match.getAwayTeam())); } } This is my code. matches contains 6 elements, but the problem is that after two matches no keys are anymore found in the standing map. The output is for example Contains key for Zuid-Afrika: true Contains key for Mexico: true Contains key for Uruguay: true Contains key for Frankrijk: true Contains key for Zuid-Afrika: false Contains key for Uruguay: false Contains key for Frankrijk: false Contains key for Mexico: false Contains key for Mexico: false Contains key for Uruguay: false Contains key for Frankrijk: false Contains key for Zuid-Afrika: false This is in a threaded environment, but the method is synchronized so I thought that this would not give a problem? I have also a simple unit test for this method and that works well.

    Read the article

  • SQL Server: "Mostly-unique" index

    - by Ian Boyd
    In a table I want to ensure that only unique values exist over the five-column key:

        Timestamp  Account  RatingDate  TripHistoryKey  EventAction
        =========  =======  ==========  ==============  ===========
        2010511    1234     2010511     1               INSERT
        2010511    1234     2010511     4               INSERT
        2010511    1234     2010511     7               INSERT
        2010511    1234     2010511     1               INSERT       <---duplicate

    But I only want the unique constraint to apply between rows when EventAction is INSERT:

        Timestamp  Account  RatingDate  TripHistoryKey  EventAction
        =========  =======  ==========  ==============  ===========
        2010511    1234     2010511     1               INSERT
        2010511    1234     2010511     1               UPDATE
        2010511    1234     2010511     1               UPDATE       <---not duplicate
        2010511    1234     2010511     1               UPDATE       <---not duplicate
        2010511    1234     2010511     1               DELETE       <---not duplicate
        2010511    1234     2010511     1               DELETE       <---not duplicate
        2010511    1234     2010511     1               INSERT       <---DUPLICATE

    Possible?
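    For what it's worth, on SQL Server 2008 and later a filtered unique index looks like a direct way to express "unique, but only for INSERT rows". A sketch using the columns from the question (the table and index names are made up):

        -- uniqueness is enforced only over rows matching the WHERE filter
        CREATE UNIQUE NONCLUSTERED INDEX IX_TripEvents_UniqueInserts
        ON dbo.TripEvents ([Timestamp], Account, RatingDate, TripHistoryKey, EventAction)
        WHERE EventAction = 'INSERT';

    Rows whose EventAction is anything other than 'INSERT' are simply left out of the index, so they never collide with each other or with the INSERT rows.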

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostGreSQL) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South),, the tests all pass. However in PostGresQL, I'm getting the following error: IntegrityError: duplicate key value violates unique constraint "business_contact_pkey" Note this happens while unit testing only - the actual page runs fine in both MySQL & PostGresql. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the Postgresql "\d business_contact" & offending tests.py method if they help. No changes made to either DB except the (same) South migrations Thanks first_name | character varying(200) | not null mobile_phone | character varying(100) | surname | character varying(200) | not null business_id | integer | not null created | timestamp with time zone | not null deleted | boolean | not null default false updated | timestamp with time zone | not null slug | character varying(150) | not null phone | character varying(100) | email | character varying(75) | id | integer | not null default nextval('business_contact_id_seq'::regclass) Indexes: "business_contact_pkey" PRIMARY KEY, btree (id) "business_contact_slug_key" UNIQUE, btree (slug) "business_contact_business_id" btree (business_id) Foreign-key constraints: "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED Referenced by: TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED TEST DEF: def test_add_business_contact(self): """ Add a business contact """ contact_slug = 'test-new-contact-added-new-adf' business_id = 1 business = Business.objects.get(id=business_id) postdata = { 'first_name': 'Test', 'surname': 'User', 'business': '1', 'slug': contact_slug, 'email': '[email protected]', 'phone': '12345678', 'mobile_phone': '9823452', 'business': 1, 'business_id': 1, } #Test to ensure contacts that should not exist are not returned contact_not_exists = Contact.objects.filter(slug=contact_slug) self.assertFalse(contact_not_exists) #Add the contact and ensure it is present in the DB afterwards """ contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug) self.client.post(contact_add_url, postdata) added_contact = Contact.objects.filter(slug=contact_slug) print added_contact try: self.assertTrue(added_contact) except: formset = ContactForm(postdata) print formset.errors self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")
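    One likely culprit (an educated guess, but consistent with the symptoms): if the test fixtures or South data migrations insert rows with explicit id values, PostgreSQL's sequence behind the id column is never advanced, so the next plain INSERT hands out an already-taken primary key; MySQL's AUTO_INCREMENT keeps itself in sync, which would explain why only PostgreSQL fails. Resynchronising the sequence is a quick way to test that theory:

        -- bump the sequence to the current maximum id so new inserts get fresh keys
        SELECT setval('business_contact_id_seq',
                      (SELECT MAX(id) FROM business_contact));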

    Read the article

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250?) and values (roughly 1MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached. What I do currently is store a special string ("parts:<number>") as the main key's value if the original value was too big, and in that case I store <number> parts with keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but messy) for some cases, not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others and time is wasted reading them). As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?

    Read the article

  • GUID or int entity key with SQL Compact/EF4?

    - by David Veeneman
    This is a follow-up to an earlier question I posted on EF4 entity keys with SQL Compact. SQL Compact doesn't allow server-generated identity keys, so I am left with creating my own keys as objects are added to the ObjectContext. My first choice would be an integer key, and the previous answer linked to a blog post that shows an extension method that uses the Max operator with a selector expression to find the next available key: public static TResult NextId<TSource, TResult>(this ObjectSet<TSource> table, Expression<Func<TSource, TResult>> selector) where TSource : class { TResult lastId = table.Any() ? table.Max(selector) : default(TResult); if (lastId is int) { lastId = (TResult)(object)(((int)(object)lastId) + 1); } return lastId; } Here's my take on the extension method: It will work fine if the ObjectContext that I am working with has an unfiltered entity set. In that case, the ObjectContext will contain all rows from the data table, and I will get an accurate result. But if the entity set is the result of a query filter, the method will return the last entity key in the filtered entity set, which will not necessarily be the last key in the data table. So I think the extension method won't really work. At this point, the obvious solution seems to be to simply use a GUID as the entity key. That way, I only need to call Guid.NewGuid() method to set the ID property before I add a new entity to my ObjectContext. Here is my question: Is there a simple way of getting the last primary key in the data store from EF4 (without having to create a second ObjectContext for that purpose)? Any other reason not to take the easy way out and simply use a GUID? Thanks for your help.

    Read the article

  • implementing the generic interface

    - by user845405
    Could anyone help me implement a generic interface for this class? I want to be able to use the Cache class methods below through an interface. Thank you for the help! public class CacheStore { private Dictionary<string, object> _cache; private object _sync; public CacheStore() { _cache = new Dictionary<string, object>(); _sync = new object(); } public bool Exists<T>(string key) where T : class { Type type = typeof(T); lock (_sync) { return _cache.ContainsKey(type.Name + key); } } public bool Exists<T>() where T : class { Type type = typeof(T); lock (_sync) { return _cache.ContainsKey(type.Name); } } public T Get<T>(string key) where T : class { Type type = typeof(T); lock (_sync) { if (_cache.ContainsKey(key + type.Name) == false) throw new ApplicationException(String.Format("An object with key '{0}' does not exists", key)); lock (_sync) { return (T)_cache[key + type.Name]; } } } public void Add<T>(string key, T value) { Type type = typeof(T); if (value.GetType() != type) throw new ApplicationException(String.Format("The type of value passed to cache {0} does not match the cache type {1} for key {2}", value.GetType().FullName, type.FullName, key)); lock (_sync) { if (_cache.ContainsKey(key + type.Name)) throw new ApplicationException(String.Format("An object with key '{0}' already exists", key)); lock (_sync) { _cache.Add(key + type.Name, value); } } } }

    Read the article

  • Mysql - help me optimize this query

    - by sandeepan-nath
    About the system: -The system has a total of 8 tables - Users - Tutor_Details (Tutors are a type of User,Tutor_Details table is linked to Users) - learning_packs, (stores packs created by tutors) - learning_packs_tag_relations, (holds tag relations meant for search) - tutors_tag_relations and tags and orders (containing purchase details of tutor's packs), order_details linked to orders and tutor_details. For a more clear idea about the tables involved please check the The tables section in the end. -A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is a simpler representation (not the actual) of the more complex query which I am trying to optimize:- I have used statements like explanation of parts in the query select SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) as key_1_total_matches, SUM(DISTINCT( t.tag LIKE "%democracy%" )) as key_2_total_matches, td., u., count(distinct(od.id_od)), if (lp.id_lp > 0) then some conditional logic on lp fields else 0 as tutor_popularity from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN `some other tables on lp.id_lp - let's call learning pack tables set (including Learning_Packs table)` LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) where some condition on Users table's fields AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp 0)) THEN `some conditions on learning pack tables set` ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc 0)) THEN `some conditions on webclasses tables set` ELSE 1 END AND CASE WHEN (od.id_od0) THEN od.id_author = td.id_tutor and some conditions on Orders table's fields ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%") group by td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20 ===================================================================== What does the above query do? Does AND logic search on the search keywords (2 in this example - "Democracy" and "Dictatorship"). Returns only those tutors for which both the keywords are present in the union of the two sets - tutors details and details of all the packs created by a tutor. To make things clear - Suppose a Tutor name "Sandeepan Nath" has created a pack "My first pack", then:- Searching "Sandeepan Nath" returns Sandeepan Nath. Searching "Sandeepan first" returns Sandeepan Nath. Searching "Sandeepan second" does not return Sandeepan Nath. ====================================================================================== The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by the query on heavily loaded databases is like 25 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. 
It is possible that some of the delay is being caused because all the possible fields have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible, displaying the same results ========================================================================================== How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. ==================================================================== The tables Most of the following tables contain many other fields which I have omitted here. CREATE TABLE IF NOT EXISTS users ( id_user int(10) unsigned NOT NULL AUTO_INCREMENT, name varchar(100) NOT NULL DEFAULT '', surname varchar(155) NOT NULL DEFAULT '', PRIMARY KEY (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636 ; CREATE TABLE IF NOT EXISTS tutor_details ( id_tutor int(10) NOT NULL AUTO_INCREMENT, id_user int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_tutor), KEY Users_FKIndex1 (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51 ; CREATE TABLE IF NOT EXISTS orders ( id_order int(10) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (id_order), KEY Orders_FKIndex1 (id_user), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275 ; ALTER TABLE orders ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS order_details ( id_od int(10) unsigned NOT NULL AUTO_INCREMENT, id_order int(10) unsigned NOT NULL DEFAULT '0', id_author int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_od), KEY Order_Details_FKIndex1 (id_order) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284 ; ALTER TABLE order_details ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs ( id_lp int(10) unsigned NOT NULL AUTO_INCREMENT, id_author int(10) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (id_lp), KEY Learning_Packs_FKIndex2 (id_author), KEY id_lp (id_lp) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23 ; CREATE TABLE IF NOT EXISTS tags ( id_tag int(10) unsigned NOT NULL AUTO_INCREMENT, tag varchar(255) DEFAULT NULL, PRIMARY KEY (id_tag), UNIQUE KEY tag (tag), KEY id_tag (id_tag), KEY tag_2 (tag), KEY tag_3 (tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419 ; CREATE TABLE IF NOT EXISTS tutors_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, KEY Tutors_Tag_Relations (id_tag), KEY id_tutor (id_tutor), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, id_lp int(10) unsigned DEFAULT NULL, KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag), KEY id_lp (id_lp), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT 
Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION; =================================================================================== Following is the exact query (this includes classes also - tutors can create classes and search terms are matched with classes created by tutors):- select count(distinct(od.id_od)) as tutor_popularity, CASE WHEN (IF((wc.id_wc 0), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))), 0)) THEN 1 ELSE 0 END as 'classes_published', CASE WHEN (IF((lp.id_lp 0), (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))),0)) THEN 1 ELSE 0 END as 'packs_published', td . * , u . * from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp) LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) OR (t.id_tag = wtagrels.id_tag) where (u.country='IE' or u.country IN ('INT')) AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp 0)) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc 0)) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN (od.id_od0) THEN od.id_author = td.id_tutor and o.order_status = 'paid' and CASE WHEN (od.id_wc 0) THEN od.can_attend_class=1 ELSE 1 END ELSE 1 END AND 1 group by td.id_tutor order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20 Please note - The provided database structure does not show all the fields and tables as in this query
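    Without an EXPLAIN plan this is only a guess, but two things stand out from the schema above: the tag-relation tables carry only single-column indexes, so the joins that filter on both id_tutor and id_tag cannot be satisfied from one composite index, and the leading-wildcard patterns (tag LIKE "%...%") prevent MySQL from using the indexes on tags.tag at all. Adding composite indexes is a cheap experiment (the index names here are made up):

        -- cover both join columns in one index
        ALTER TABLE tutors_tag_relations
            ADD INDEX idx_ttr_tutor_tag (id_tutor, id_tag);
        ALTER TABLE learning_packs_tag_relations
            ADD INDEX idx_lptr_tutor_tag (id_tutor, id_tag);

    Replacing the double-ended LIKE with exact tag matches (or at least prefix-only patterns) would let the existing indexes on tags.tag do their job.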

    Read the article

  • Can anyone explain why my crypto++ decrypted file is 16 bytes short?

    - by Tom Williams
    I suspect it might be too much to hope for, but can anyone with experience with crypto++ explain why the "decrypted.out" file created by main() is 16 characters short (which probably not coincidentally is the block size)? I think the issue must be in CryptStreamBuffer::GetNextChar(), but I've been staring at it and the crypto++ documentation for hours. Any other comments about how crummy or naive my std::streambuf implementation are also welcome ;-) And I've just noticed I'm missing some calls to delete so you don't have to tell me about those. Thanks, Tom // Runtime Includes #include <iostream> // Crypto++ Includes #include "aes.h" #include "modes.h" // xxx_Mode< > #include "filters.h" // StringSource and // StreamTransformation #include "files.h" using namespace std; class CryptStreamBuffer: public std::streambuf { public: CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c); CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c); protected: virtual int_type overflow(int_type ch = traits_type::eof()); virtual int_type uflow(); virtual int_type underflow(); virtual int_type pbackfail(int_type ch); virtual int sync(); private: int GetNextChar(); int m_NextChar; // Buffered character CryptoPP::StreamTransformationFilter* m_StreamTransformationFilter; CryptoPP::FileSource* m_Source; CryptoPP::FileSink* m_Sink; }; // class CryptStreamBuffer CryptStreamBuffer::CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c) : m_NextChar(traits_type::eof()), m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) { m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c); m_Source = new CryptoPP::FileSource(encryptedInput, false, m_StreamTransformationFilter); } CryptStreamBuffer::CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c) : m_NextChar(traits_type::eof()), m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) { m_Sink = new CryptoPP::FileSink(encryptedOutput); m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, m_Sink); } CryptStreamBuffer::int_type CryptStreamBuffer::overflow(int_type ch) { return m_StreamTransformationFilter->Put((byte)ch); } CryptStreamBuffer::int_type CryptStreamBuffer::uflow() { int_type result = GetNextChar(); // Reset the buffered character m_NextChar = traits_type::eof(); return result; } CryptStreamBuffer::int_type CryptStreamBuffer::underflow() { return GetNextChar(); } CryptStreamBuffer::int_type CryptStreamBuffer::pbackfail(int_type ch) { return traits_type::eof(); } int CryptStreamBuffer::sync() { if (m_Sink) { m_StreamTransformationFilter->MessageEnd(); } } int CryptStreamBuffer::GetNextChar() { // If we have a buffered character do nothing if (m_NextChar != traits_type::eof()) { return m_NextChar; } // If there are no more bytes currently available then pump the source // *** I SUSPECT THE PROBLEM IS HERE *** if (m_StreamTransformationFilter->MaxRetrievable() == 0) { m_Source->Pump(1024); } // Retrieve the next byte byte nextByte; size_t noBytes = m_StreamTransformationFilter->Get(nextByte); if (0 == noBytes) { return traits_type::eof(); } // Buffer up the next character m_NextChar = nextByte; return m_NextChar; } void InitKey(byte key[]) { key[0] = -62; key[1] = 102; key[2] = 78; key[3] = 75; key[4] = -96; key[5] = 125; key[6] = 66; key[7] = 125; key[8] = -95; key[9] = -66; key[10] = 114; key[11] = 22; key[12] = 48; key[13] = 111; key[14] = -51; key[15] = 112; } void DecryptFile(const char* sourceFileName, const char* 
destFileName) { ifstream ifs(sourceFileName, ios::in | ios::binary); ofstream ofs(destFileName, ios::out | ios::binary); byte key[CryptoPP::AES::DEFAULT_KEYLENGTH]; InitKey(key); CryptoPP::ECB_Mode<CryptoPP::AES>::Decryption decryptor(key, sizeof(key)); if (ifs) { if (ofs) { CryptStreamBuffer cryptBuf(ifs, decryptor); std::istream decrypt(&cryptBuf); int c; while (EOF != (c = decrypt.get())) { ofs << (char)c; } ofs.flush(); } else { std::cerr << "Failed to open file '" << destFileName << "'." << endl; } } else { std::cerr << "Failed to open file '" << sourceFileName << "'." << endl; } } void EncryptFile(const char* sourceFileName, const char* destFileName) { ifstream ifs(sourceFileName, ios::in | ios::binary); ofstream ofs(destFileName, ios::out | ios::binary); byte key[CryptoPP::AES::DEFAULT_KEYLENGTH]; InitKey(key); CryptoPP::ECB_Mode<CryptoPP::AES>::Encryption encryptor(key, sizeof(key)); if (ifs) { if (ofs) { CryptStreamBuffer cryptBuf(ofs, encryptor); std::ostream encrypt(&cryptBuf); int c; while (EOF != (c = ifs.get())) { encrypt << (char)c; } encrypt.flush(); } else { std::cerr << "Failed to open file '" << destFileName << "'." << endl; } } else { std::cerr << "Failed to open file '" << sourceFileName << "'." << endl; } } int main(int argc, char* argv[]) { EncryptFile(argv[1], "encrypted.out"); DecryptFile("encrypted.out", "decrypted.out"); return 0; }
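
    A hedged guess at the cause, with a sketch of one possible change rather than a verified fix: with ECB mode and the default padding, the StreamTransformationFilter holds back the final block until it is told the message has ended, and GetNextChar() never signals that, so the last 16 bytes stay buffered inside the filter. The revision below signals MessageEnd() once pumping the source produces nothing new; m_MessageEnded is an extra bool member (initialised to false in both constructors) that is not part of the original class.

        // Sketch only: revised GetNextChar() that flushes the filter's final block.
        int CryptStreamBuffer::GetNextChar()
        {
            // If we have a buffered character do nothing
            if (m_NextChar != traits_type::eof())
                return m_NextChar;

            if (m_StreamTransformationFilter->MaxRetrievable() == 0)
            {
                m_Source->Pump(1024);

                // If pumping made nothing retrievable, assume the file is exhausted
                // and tell the filter the message has ended so it releases the last
                // (padded) block it has been holding back.
                if (m_StreamTransformationFilter->MaxRetrievable() == 0 && !m_MessageEnded)
                {
                    m_StreamTransformationFilter->MessageEnd();
                    m_MessageEnded = true;
                }
            }

            // Retrieve the next byte
            byte nextByte;
            if (m_StreamTransformationFilter->Get(nextByte) == 0)
                return traits_type::eof();

            m_NextChar = nextByte;
            return m_NextChar;
        }

    The encryption side already flushes its final block because EncryptFile calls encrypt.flush(), which reaches sync() and MessageEnd() via m_Sink; the decryption side has no equivalent path, which would explain why only decrypted.out comes up short.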

    Read the article

  • Settings schema 'gnome.org.desktop.a11y.magnifier' does not contain a key named 'invert-lightness' Error when using GNOME

    - by user1105047
    I have just installed GNOME Shell on my Ubuntu 12.04 system. When I log in I get this error: GLib-GIO-ERROR: **: Settings schema 'gnome.org.desktop.a11y.magnifier' does not contain a key named 'invert-lightness'. Does anyone know how to fix this? Because of this error, GNOME Shell doesn't start at all! When I installed it I followed these instructions: http://www.filiwiese.com/installing-gnome-on-ubuntu-12-04-precise-pangolin/
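
    One commonly suggested remedy for missing-schema errors of this kind, offered as an assumption rather than a verified fix for this particular install, is to make sure the GNOME settings schemas are current and the schema cache is recompiled:

        # Reinstall the schema packages and rebuild the compiled schema cache
        # (package names assume GNOME Shell from the Ubuntu 12.04 repositories).
        sudo apt-get install --reinstall gsettings-desktop-schemas gnome-shell
        sudo glib-compile-schemas /usr/share/glib-2.0/schemas/

    The schema id quoted in the error also looks unusual (the stock id is normally written 'org.gnome.desktop.a11y.magnifier'), which may point to a stale or third-party schema file left behind by the manual installation guide.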

    Read the article

  • Need help: what are the key elements to focus on in the code review phase?

    - by Sankar Ganesh
    Hi friends, please share your views on the code review process. If someone gives you a code snippet and asks you to review it, what are the major things you focus on during the review? For instance, I check whether the code contains any dead code. Apart from checking for dead code, what are the key elements to focus on in the code review process? Thanks for sharing your views. Sankar Ganesh.S

    Read the article

  • Can I define keyboard shortcuts using the Super key?

    - by Rasmus
    In 10.04, I had a lot of keyboard shortcuts defined using Super/Mod4 plus a single other key: Super+O ran Opera, Super+W opened Nautilus pointing to my Work folder, and so on. In 11.04, these do not seem to work -- only Super+R works to run the terminal, and Super+Shift+W successfully runs Nautilus. Is there some way I can get these to function again? Adding them in Keyboard Shortcuts does not work, and neither does adding commands in CompizConfig Settings Manager.
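
    One workaround that is sometimes suggested, sketched here under the assumption that gnome-settings-daemon's gconf-based custom keybindings are still honoured in 11.04, is to create the bindings from a terminal instead of through the dialog:

        # Example only: bind Super+O to Opera and Super+W to the Work folder
        # (the custom0/custom1 slots and the folder path are placeholders).
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/name "Run Opera"
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/binding "<Super>o"
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/action "opera"
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom1/name "Open Work folder"
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom1/binding "<Super>w"
        gconftool-2 --type string --set /desktop/gnome/keybindings/custom1/action "nautilus /home/rasmus/Work"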

    Read the article

  • How to detect keyloggers in Windows that hook key presses?

    - by saber tabatabaee yazdi
    For security reasons we have to detect all keyloggers and log them somewhere such as the Windows event log. I have a piece of C# code that is very easy to install on all clients; it is up and running every day in the system tray and no one can close it. We want to modify that code to send logs to a central web service on our network (this web service was installed last year and already receives and logs all our other security logs).
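
    Detecting hooks set by other processes is the hard part and needs more than a short snippet, but the "send logs to a central web service" half can be sketched as below; the service URL, the JSON shape and the event-source name are assumptions, not details from the question.

        // Sketch only: forward a detection event to a central web service and to the
        // local Windows event log. Adjust the URL and payload to match the existing service.
        using System.Diagnostics;
        using System.Net;

        static class SecurityEventForwarder
        {
            // Hypothetical endpoint of the central logging web service.
            private const string ServiceUrl = "http://logserver.internal/api/security-events";

            public static void Report(string machine, string detail)
            {
                // Naive JSON by concatenation; a real implementation should use a serializer
                // so quotes and other special characters are escaped properly.
                string payload = "{\"machine\":\"" + machine + "\",\"event\":\"" + detail + "\"}";

                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    client.UploadString(ServiceUrl, "POST", payload);
                }

                // "SecurityAgent" is a placeholder source; register it once with
                // EventLog.CreateEventSource (requires admin) before this call succeeds.
                EventLog.WriteEntry("SecurityAgent", detail, EventLogEntryType.Warning);
            }
        }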

    Read the article
