Search Results

Search found 21336 results on 854 pages for 'db api'.


  • hadoop implementing a generic list writable

    - by Guruprasad Venkatesh
    I am working on building a map-reduce pipeline of jobs (with one MR job's output feeding into another as input). The values being passed around are fairly complex: there are lists of different types, and hash maps whose values are lists. The Hadoop API does not seem to have a ListWritable. I am trying to write a generic one, but it seems I can't instantiate a generic type in my readFields implementation unless I pass in the class type itself:

        public class ListWritable<T extends Writable> implements Writable {
            private List<T> list;
            private Class<T> clazz;

            public ListWritable(Class<T> clazz) {
                this.clazz = clazz;
                list = new ArrayList<T>();
            }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeInt(list.size());
                for (T element : list) {
                    element.write(out);
                }
            }

            @Override
            public void readFields(DataInput in) throws IOException {
                int count = in.readInt();
                this.list = new ArrayList<T>();
                for (int i = 0; i < count; i++) {
                    try {
                        T obj = clazz.newInstance();
                        obj.readFields(in);
                        list.add(obj);
                    } catch (InstantiationException e) {
                        e.printStackTrace();
                    } catch (IllegalAccessException e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    But Hadoop requires all Writables to have a no-argument constructor to read the values back. Has anybody tried to do the same and solved this problem? TIA.
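
    One way around the no-argument-constructor requirement is to make the Writable self-describing: serialize the element class name into the stream so that readFields can recover it reflectively. The sketch below is a hedged illustration of that idea, not Hadoop API; the stream layout and the GenericListWritable name are assumptions, and it presumes a homogeneous list whose element type honours the usual Writable no-arg-constructor contract.

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        import org.apache.hadoop.io.Writable;

        // Sketch: a self-describing list Writable (hypothetical class, not Hadoop API).
        public class GenericListWritable<T extends Writable> implements Writable {
            private List<T> list = new ArrayList<T>();

            public GenericListWritable() {}            // the no-arg ctor Hadoop needs

            public void add(T element) { list.add(element); }
            public List<T> get()       { return list; }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeInt(list.size());
                if (!list.isEmpty()) {
                    // Record the concrete element class once; assumes all elements
                    // share one type.
                    out.writeUTF(list.get(0).getClass().getName());
                }
                for (T element : list) {
                    element.write(out);
                }
            }

            @Override
            @SuppressWarnings("unchecked")
            public void readFields(DataInput in) throws IOException {
                int count = in.readInt();
                list = new ArrayList<T>(count);
                if (count == 0) {
                    return;
                }
                String className = in.readUTF();
                try {
                    // Writables must have a no-arg constructor, so newInstance works.
                    Class<T> clazz = (Class<T>) Class.forName(className);
                    for (int i = 0; i < count; i++) {
                        T obj = clazz.newInstance();
                        obj.readFields(in);
                        list.add(obj);
                    }
                } catch (Exception e) {
                    throw new IOException("Cannot reconstruct list element", e);
                }
            }
        }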


  • How to make Web Storage persistent in Cordova using JS?

    - by ett
    I have a small quiz app, a cross-platform mobile app that I plan to run on Android, iOS, and WP8. I want to store a local high score that keeps track of how many points the user had and, if he/she does better than the stored high score, updates it. I want the high score to be persistent: every time the user opens the app, the last high score should be present and compared with the new quiz score. In other words, the high score should not be deleted each time the app is closed. I also want the high score to be set to 0 the first time the app is run, so the first time the user finishes the quiz they obviously get a new high score. I have this code so far:

        // scoredb.js
        // Wait for device API libraries to load
        document.addEventListener("deviceready", initScoreDB, false);

        // Device APIs are available
        function initScoreDB() {
            window.localStorage.setItem("score", 0);
            highscore = window.localStorage.getItem("score");
        }

        // main.js
        var correct = 0;
        var highscore;

        // In between I have some code that keeps incrementing
        // the correct variable for each correct answer.

        if (correct > highscore) {
            window.localStorage.setItem("score", correct);
            highscore = correct;
        }

    It does seem to work okay once the app is started. I did the quiz three times in the simulator, and it keeps the score as it should. However, each time I open the app, the high score is reset to 0. I guess this is because I call initScoreDB when the device is ready, initialize the score to 0 there, and give that value to highscore. Can someone help me initialize the score to 0 only when the app is run for the first time, and on all other runs keep the latest high score and compare it with the score achieved when the quiz is finished? If someone can help me, I would be glad.


  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing so fast, so I wrote a little test script. The script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 write steps, the database runs a compaction step and the size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time with the default number of document revisions (_revs_limit = 1000), and the second time with the number of document revisions set to 1. (Each run produces a plot of size against revision number; the plots are not reproduced here.) For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision; once the 1000 revisions are reached, the size should stay constant as older revisions are discarded, and after the compaction the size should fall significantly. In the second run, the first revision should result in a certain database size that is then kept during the subsequent writes, as every new revision leads to the deletion of the previous one. I could understand a little overhead to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon, or correct the assumptions that led to my wrong expectations?


  • substitution cypher with different alphabet length

    - by seanizer
    I would like to implement a simple substitution cypher to mask private IDs in URLs. I know what my IDs will look like (combinations of uppercase ASCII, digits, and underscore), and they will be rather long, as they are composite keys. I would like to use a longer alphabet to shorten the resulting codes (upper and lower case ASCII letters, digits, and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so a compression of almost 50% would be available. Let's say my URLs look like this:

        /my/page/GFZHFFFZFZTFZTF_24_F34

    and I want them to look like this instead:

        /my/page/Ft32zfegZFV5

    Obviously both arrays would be shuffled to bring some random order in. This does not have to be secure; if someone figures it out, fine, but I don't want the scheme to be obvious. My desired solution would be to convert the string to an integer representation in radix 37, convert that to radix 62, and use the second alphabet to write out the number. Is there any sample code available that does something similar? Integer.parseInt (http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29) has some similar logic, but it is hard-coded to use standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
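
    A minimal sketch of that radix-37-to-radix-62 plan in Java, using BigInteger since composite keys can overflow a long. The alphabet constants below are unshuffled placeholders; shuffle them once and keep the order fixed thereafter.

        import java.math.BigInteger;

        public class AlphabetCodec {
            // Unshuffled placeholder alphabets -- shuffle once, then never change.
            private static final String IN  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_";  // radix 37
            private static final String OUT = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; // radix 62

            public static String encode(String id)   { return convert(id, IN, OUT); }
            public static String decode(String code) { return convert(code, OUT, IN); }

            // Interpret 'value' as digits in 'from', then rewrite it in 'to'.
            // Caveat: leading zero-digits (the first char of 'from') are not preserved.
            private static String convert(String value, String from, String to) {
                BigInteger inRadix = BigInteger.valueOf(from.length());
                BigInteger n = BigInteger.ZERO;
                for (char c : value.toCharArray()) {
                    int digit = from.indexOf(c);
                    if (digit < 0) {
                        throw new IllegalArgumentException("Character not in alphabet: " + c);
                    }
                    n = n.multiply(inRadix).add(BigInteger.valueOf(digit));
                }
                BigInteger outRadix = BigInteger.valueOf(to.length());
                StringBuilder sb = new StringBuilder();
                while (n.signum() > 0) {
                    BigInteger[] qr = n.divideAndRemainder(outRadix);
                    sb.append(to.charAt(qr[1].intValue()));
                    n = qr[0];
                }
                return sb.length() == 0 ? to.substring(0, 1) : sb.reverse().toString();
            }

            public static void main(String[] args) {
                String code = encode("GFZHFFFZFZTFZTF_24_F34");
                System.out.println(code + " -> " + decode(code));
            }
        }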


  • mysql and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion]. Table definition:

        CREATE TABLE table1 (x int NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                SET NEW.x = NULL;
            END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                ROLLBACK;
            END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        ROLLBACK;

    You are guaranteed that: your DB will always be MySQL, the table type will always be InnoDB, and that NOT NULL column will always stay the way it is. Question: do you see anything objectionable in the first method?


  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms)  SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms)  SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms)  SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms)  SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking about now: Can Rails 3 eager load the whole Post tree plus associated Slugs in one DB call? Can I do that easily with named scopes or custom SQL? What is best practice in this situation? I'm not really thinking about memcached here, as that can be applied to much more than just this.


  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that access the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding, which I'd like corrected/advised, is that:

    - Postgres is a great DB, but all the Python bindings (and there is a confusing maze of them) are abandonware.
    - There is psycopg2, but that makes a lot of noise about doing its own connection pooling and things; does this coexist gracefully/usefully/transparently with Twisted's async database connection pooling and such?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck under the usage pattern I envisage.
    - MySQL: after the Oracle takeover, who'd want to adopt it now, or adopt a fork?

    Is there anything else out there?


  • Why does iOS 5 fail to connect to a server running JDK 1.6, but not JDK 1.5

    - by KC Baltz
    We have a Java socket server listening on an SSLSocket (port 443) and an iOS application that connects to it. When running on iOS 5.1, the application stopped working when we upgraded the Java version of the server from JDK 1.5 to 1.6 (or 1.7). The app connects just fine to JDK 5 and 6 when running on iOS 6. The iOS app reports the error -9809 = errSSLCrypto. On the Java side, we get javax.net.ssl.SSLException: Received fatal alert: close_notify. On the Java server side, we have enabled all the available cipher suites. On the client side we have tested enabling several different suites, although we have yet to complete a test with each one individually enabled. Right now it is failing when we use TLS_DH_anon_WITH_AES_128_CBC_SHA, although it has failed with others, and we are starting to think it's not the suite. Here is the debug output; it makes it all the way to ServerHelloDone and then fails shortly thereafter:

        Is secure renegotiation: false
        [Raw read]: length = 5
        0000: 16 03 03 00 41 ....A
        [Raw read]: length = 65
        0000: 01 00 00 3D 03 03 50 83 1E 0B 56 19 25 65 C8 F2 ...=..P...V.%e..
        0010: AF 02 AD 48 FE E2 92 CF B8 D7 A6 A3 EA C5 FF 5D ...H...........]
        0020: 74 0F 1B C1 99 18 00 00 08 00 FF 00 34 00 1B 00 t...........4...
        0030: 18 01 00 00 0C 00 0D 00 08 00 06 05 01 04 01 02 ................
        0040: 01 .
        URT-, READ: Unknown-3.3 Handshake, length = 65
        *** ClientHello, Unknown-3.3
        RandomCookie: GMT: 1333992971 bytes = { 86, 25, 37, 101, 200, 242, 175, 2, 173, 72, 254, 226, 146, 207, 184, 215, 166, 163, 234, 197, 255, 93, 116, 15, 27, 193, 153, 24 }
        Session ID: {}
        Cipher Suites: [TLS_EMPTY_RENEGOTIATION_INFO_SCSV, TLS_DH_anon_WITH_AES_128_CBC_SHA, SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5]
        Compression Methods: { 0 }
        Unsupported extension signature_algorithms, data: 00:06:05:01:04:01:02:01
        ***
        [read] MD5 and SHA1 hashes: len = 65
        0000: 01 00 00 3D 03 03 50 83 1E 0B 56 19 25 65 C8 F2 ...=..P...V.%e..
        0010: AF 02 AD 48 FE E2 92 CF B8 D7 A6 A3 EA C5 FF 5D ...H...........]
        0020: 74 0F 1B C1 99 18 00 00 08 00 FF 00 34 00 1B 00 t...........4...
        0030: 18 01 00 00 0C 00 0D 00 08 00 06 05 01 04 01 02 ................
        0040: 01 .
        %% Created: [Session-1, TLS_DH_anon_WITH_AES_128_CBC_SHA]
        *** ServerHello, TLSv1
        RandomCookie: GMT: 1333992972 bytes = { 100, 3, 56, 153, 7, 2, 251, 64, 41, 32, 66, 240, 227, 181, 55, 190, 2, 237, 146, 0, 73, 119, 70, 0, 160, 9, 28, 207 }
        Session ID: {80, 131, 30, 12, 241, 73, 52, 38, 46, 41, 237, 226, 199, 246, 156, 45, 3, 247, 182, 43, 223, 8, 49, 169, 188, 63, 160, 41, 102, 199, 50, 190}
        Cipher Suite: TLS_DH_anon_WITH_AES_128_CBC_SHA
        Compression Method: 0
        Extension renegotiation_info, renegotiated_connection: <empty>
        ***
        Cipher suite: TLS_DH_anon_WITH_AES_128_CBC_SHA
        *** Diffie-Hellman ServerKeyExchange
        DH Modulus: { 233, 230, 66, 89, 157, 53, 95, 55, 201, 127, 253, 53, 103, 18, 11, 142, 37, 201, 205, 67, 233, 39, 179, 169, 103, 15, 190, 197, 216, 144, 20, 25, 34, 210, 195, 179, 173, 36, 128, 9, 55, 153, 134, 157, 30, 132, 106, 171, 73, 250, 176, 173, 38, 210, 206, 106, 34, 33, 157, 71, 11, 206, 125, 119, 125, 74, 33, 251, 233, 194, 112, 181, 127, 96, 112, 2, 243, 206, 248, 57, 54, 148, 207, 69, 238, 54, 136, 193, 26, 140, 86, 171, 18, 122, 61, 175 }
        DH Base: { 48, 71, 10, 213, 160, 5, 251, 20, 206, 45, 157, 205, 135, 227, 139, 199, 209, 177, 197, 250, 203, 174, 203, 233, 95, 25, 10, 167, 163, 29, 35, 196, 219, 188, 190, 6, 23, 69, 68, 64, 26, 91, 44, 2, 9, 101, 216, 194, 189, 33, 113, 211, 102, 132, 69, 119, 31, 116, 186, 8, 77, 32, 41, 216, 60, 28, 21, 133, 71, 243, 169, 241, 162, 113, 91, 226, 61, 81, 174, 77, 62, 90, 31, 106, 112, 100, 243, 22, 147, 58, 52, 109, 63, 82, 146, 82 }
        Server DH Public Key: { 8, 60, 59, 13, 224, 110, 32, 168, 116, 139, 246, 146, 15, 12, 216, 107, 82, 182, 140, 80, 193, 237, 159, 189, 87, 34, 18, 197, 181, 252, 26, 27, 94, 160, 188, 162, 30, 29, 165, 165, 68, 152, 11, 204, 251, 187, 14, 233, 239, 103, 134, 168, 181, 173, 206, 151, 197, 128, 65, 239, 233, 191, 29, 196, 93, 80, 217, 55, 81, 240, 101, 31, 119, 98, 188, 211, 52, 146, 168, 127, 127, 66, 63, 111, 198, 134, 70, 213, 31, 162, 146, 25, 178, 79, 56, 116 }
        Anonymous
        *** ServerHelloDone
        [write] MD5 and SHA1 hashes: len = 383
        0000: 02 00 00 4D 03 01 50 83 1E 0C 64 03 38 99 07 02 ...M..P...d.8...
        0010: FB 40 29 20 42 F0 E3 B5 37 BE 02 ED 92 00 49 77 .@) B...7.....Iw
        0020: 46 00 A0 09 1C CF 20 50 83 1E 0C F1 49 34 26 2E F..... P....I4&.
        0030: 29 ED E2 C7 F6 9C 2D 03 F7 B6 2B DF 08 31 A9 BC ).....-...+..1..
        0040: 3F A0 29 66 C7 32 BE 00 34 00 00 05 FF 01 00 01 ?.)f.2..4.......
        0050: 00 0C 00 01 26 00 60 E9 E6 42 59 9D 35 5F 37 C9 ....&.`..BY.5_7.
        0060: 7F FD 35 67 12 0B 8E 25 C9 CD 43 E9 27 B3 A9 67 ..5g...%..C.'..g
        0070: 0F BE C5 D8 90 14 19 22 D2 C3 B3 AD 24 80 09 37 ......."....$..7
        0080: 99 86 9D 1E 84 6A AB 49 FA B0 AD 26 D2 CE 6A 22 .....j.I...&..j"
        0090: 21 9D 47 0B CE 7D 77 7D 4A 21 FB E9 C2 70 B5 7F !.G...w.J!...p..
        00A0: 60 70 02 F3 CE F8 39 36 94 CF 45 EE 36 88 C1 1A `p....96..E.6...
        00B0: 8C 56 AB 12 7A 3D AF 00 60 30 47 0A D5 A0 05 FB .V..z=..`0G.....
        00C0: 14 CE 2D 9D CD 87 E3 8B C7 D1 B1 C5 FA CB AE CB ..-.............
        00D0: E9 5F 19 0A A7 A3 1D 23 C4 DB BC BE 06 17 45 44 ._.....#......ED
        00E0: 40 1A 5B 2C 02 09 65 D8 C2 BD 21 71 D3 66 84 45 @.[,..e...!q.f.E
        00F0: 77 1F 74 BA 08 4D 20 29 D8 3C 1C 15 85 47 F3 A9 w.t..M ).<...G..
        0100: F1 A2 71 5B E2 3D 51 AE 4D 3E 5A 1F 6A 70 64 F3 ..q[.=Q.M>Z.jpd.
        0110: 16 93 3A 34 6D 3F 52 92 52 00 60 08 3C 3B 0D E0 ..:4m?R.R.`.<;..
        0120: 6E 20 A8 74 8B F6 92 0F 0C D8 6B 52 B6 8C 50 C1 n .t......kR..P.
        0130: ED 9F BD 57 22 12 C5 B5 FC 1A 1B 5E A0 BC A2 1E ...W"......^....
        0140: 1D A5 A5 44 98 0B CC FB BB 0E E9 EF 67 86 A8 B5 ...D........g...
        0150: AD CE 97 C5 80 41 EF E9 BF 1D C4 5D 50 D9 37 51 .....A.....]P.7Q
        0160: F0 65 1F 77 62 BC D3 34 92 A8 7F 7F 42 3F 6F C6 .e.wb..4....B?o.
        0170: 86 46 D5 1F A2 92 19 B2 4F 38 74 0E 00 00 00 .F......O8t....
        URT-, WRITE: TLSv1 Handshake, length = 383
        [Raw write]: length = 388
        0000: 16 03 01 01 7F 02 00 00 4D 03 01 50 83 1E 0C 64 ........M..P...d
        0010: 03 38 99 07 02 FB 40 29 20 42 F0 E3 B5 37 BE 02 .8....@) B...7..
        0020: ED 92 00 49 77 46 00 A0 09 1C CF 20 50 83 1E 0C ...IwF..... P...
        0030: F1 49 34 26 2E 29 ED E2 C7 F6 9C 2D 03 F7 B6 2B .I4&.).....-...+
        0040: DF 08 31 A9 BC 3F A0 29 66 C7 32 BE 00 34 00 00 ..1..?.)f.2..4..
        0050: 05 FF 01 00 01 00 0C 00 01 26 00 60 E9 E6 42 59 .........&.`..BY
        0060: 9D 35 5F 37 C9 7F FD 35 67 12 0B 8E 25 C9 CD 43 .5_7...5g...%..C
        0070: E9 27 B3 A9 67 0F BE C5 D8 90 14 19 22 D2 C3 B3 .'..g......."...
        0080: AD 24 80 09 37 99 86 9D 1E 84 6A AB 49 FA B0 AD .$..7.....j.I...
        0090: 26 D2 CE 6A 22 21 9D 47 0B CE 7D 77 7D 4A 21 FB &..j"!.G...w.J!.
        00A0: E9 C2 70 B5 7F 60 70 02 F3 CE F8 39 36 94 CF 45 ..p..`p....96..E
        00B0: EE 36 88 C1 1A 8C 56 AB 12 7A 3D AF 00 60 30 47 .6....V..z=..`0G
        00C0: 0A D5 A0 05 FB 14 CE 2D 9D CD 87 E3 8B C7 D1 B1 .......-........
        00D0: C5 FA CB AE CB E9 5F 19 0A A7 A3 1D 23 C4 DB BC ......_.....#...
        00E0: BE 06 17 45 44 40 1A 5B 2C 02 09 65 D8 C2 BD 21 ...ED@.[,..e...!
        00F0: 71 D3 66 84 45 77 1F 74 BA 08 4D 20 29 D8 3C 1C q.f.Ew.t..M ).<.
        0100: 15 85 47 F3 A9 F1 A2 71 5B E2 3D 51 AE 4D 3E 5A ..G....q[.=Q.M>Z
        0110: 1F 6A 70 64 F3 16 93 3A 34 6D 3F 52 92 52 00 60 .jpd...:4m?R.R.`
        0120: 08 3C 3B 0D E0 6E 20 A8 74 8B F6 92 0F 0C D8 6B .<;..n .t......k
        0130: 52 B6 8C 50 C1 ED 9F BD 57 22 12 C5 B5 FC 1A 1B R..P....W"......
        0140: 5E A0 BC A2 1E 1D A5 A5 44 98 0B CC FB BB 0E E9 ^.......D.......
        0150: EF 67 86 A8 B5 AD CE 97 C5 80 41 EF E9 BF 1D C4 .g........A.....
        0160: 5D 50 D9 37 51 F0 65 1F 77 62 BC D3 34 92 A8 7F ]P.7Q.e.wb..4...
        0170: 7F 42 3F 6F C6 86 46 D5 1F A2 92 19 B2 4F 38 74 .B?o..F......O8t
        0180: 0E 00 00 00 ....
        [Raw read]: length = 5
        0000: 15 03 01 00 02 .....
        [Raw read]: length = 2
        0000: 02 00 ..
        URT-, READ: TLSv1 Alert, length = 2
        URT-, RECV TLSv1 ALERT: fatal, close_notify
        URT-, called closeSocket()
        URT-, handling exception: javax.net.ssl.SSLException: Received fatal alert: close_notify

    FYI, this works in iOS 6.0.
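
    For narrowing down suite-related failures like this one, below is a minimal server-side sketch that pins one cipher suite and protocol per test run. It assumes a plain JSSE SSLServerSocket, since the question does not show the server code, and the anonymous suite means no keystore is needed.

        import javax.net.ssl.SSLServerSocket;
        import javax.net.ssl.SSLServerSocketFactory;
        import javax.net.ssl.SSLSocket;

        public class AnonTlsProbe {
            public static void main(String[] args) throws Exception {
                // Run with -Djavax.net.debug=ssl to get handshake dumps like the
                // one above. Anonymous DH suites need no keystore, so the default
                // factory suffices.
                SSLServerSocketFactory factory =
                        (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
                try (SSLServerSocket server =
                        (SSLServerSocket) factory.createServerSocket(443)) {
                    // Pin exactly one suite/protocol pair per test run so a failing
                    // handshake can be attributed to it.
                    server.setEnabledCipherSuites(
                            new String[] { "TLS_DH_anon_WITH_AES_128_CBC_SHA" });
                    server.setEnabledProtocols(new String[] { "TLSv1" });
                    try (SSLSocket client = (SSLSocket) server.accept()) {
                        client.startHandshake();  // throws if the peer aborts
                        System.out.println("negotiated: "
                                + client.getSession().getCipherSuite());
                    }
                }
            }
        }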


  • Algorithm for scoring user activity

    - by ManBugra
    I have an application where users can: write reviews about products, add comments to products, and up/down vote reviews and comments. Every up/down vote is recorded in a DB table. What I want to do now is create a ranking of the most active users in the last 4 weeks. Of course, good reviews should be weighted more than good comments, but, for example, 10 good comments should also be weighted more than just one good review. Example:

        // reviews created in the recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var reviews = [ [120,23], [32,12], [12,0], [23,45] ];

        // comments created in the recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var comments = [ [1,2], [322,1], [0,0], [0,45] ];

        // create weight vector
        // format: [ reviewWeight, commentsWeight ]
        var weight = [0.60, 0.40];

        // signature: activities..., activityWeight
        var userActivityScore = score(reviews, comments, weight);

        ... update user table ...

        List<Users> users = "from users u order by u.userActivityScore desc";

    How would a fair scoring function look? How could an implementation of the score() function look? How could a weight be added to the function so that reviews are weighted more heavily? And how would such a function look if, for example, votes for pictures were added?
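
    One possible shape for score(), sketched in Java since the pseudo-code above is language-neutral: each item contributes its log-damped net vote count, and the per-type sums are combined with the weight vector. The damping and all constants are assumptions, not a known-fair formula.

        import java.util.Arrays;
        import java.util.List;

        public class ActivityScorer {
            // One possible score(): weighted sum over activity types, where each
            // item contributes its log-damped net vote count.
            public static double score(List<int[]> reviews, List<int[]> comments,
                                       double reviewWeight, double commentWeight) {
                return reviewWeight * sum(reviews) + commentWeight * sum(comments);
            }

            private static double sum(List<int[]> items) {
                double total = 0.0;
                for (int[] votes : items) {
                    int net = votes[0] - votes[1];   // upVoteCount - downVoteCount
                    // log1p bounds each item's pull, so ten decent comments can
                    // outweigh one hit review (an assumption -- tune to taste).
                    total += Math.signum(net) * Math.log1p(Math.abs(net));
                }
                return total;
            }

            public static void main(String[] args) {
                List<int[]> reviews = Arrays.asList(new int[]{120, 23},
                        new int[]{32, 12}, new int[]{12, 0}, new int[]{23, 45});
                List<int[]> comments = Arrays.asList(new int[]{1, 2},
                        new int[]{322, 1}, new int[]{0, 0}, new int[]{0, 45});
                // Weight vector from the question: reviews 0.60, comments 0.40.
                System.out.println(score(reviews, comments, 0.60, 0.40));
            }
        }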


  • AJAX Autosave

    - by antony.trupe
    What's the best JavaScript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid; think Gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function. Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework a level higher, one that interacts with the form itself. daemach introduced an implementation on top of jQuery at http://ideamill.synaptrixgroup.com/?p=3, but I'm not convinced it meets the lightweight and well-engineered criteria. Criteria:

    - stable, lightweight, well engineered
    - saves onChange and/or onBlur
    - saves no more frequently than a given number of milliseconds
    - handles multiple updates happening at the same time
    - doesn't save if no change has occurred since the last save
    - saves to different URLs per input class

    Update: I've stabilized a solution; see my answer below for links.


  • How to store multiple cookies through PHP Curl

    - by Ahmad
    'SOUP.IO' does not provide any API, so I am trying to use PHP cURL to log in and submit data through PHP. I am able to log in to the website successfully (through cURL), but when I try to submit data through cURL, it gives me an 'invalid user' error. When I analysed the code and website, I found that cURL is getting the values of only 1-2 cookies, whereas when I open the same page in Firefox, it shows me 6-7 cookies related to 'SOUP.IO'. Can someone guide me on how to get all 7 cookie values? The following cookie is obtainable by cURL: soup_session_id. The following cookies are shown in Firefox (but not through cURL): __qca, __utma, __utmb, __utmc, __utmz. This is my cURL code:

        $cookie_file_path = getcwd()."/cookie/cookie.txt";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'http://www.soup.io');
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file_path);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file_path);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) FirePHP/0.4');
        curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $result = curl_exec($ch);
        curl_close($ch);
        print_r($result);

    Can someone guide me in this regard? Thanks in advance.


  • Django Database design -- Is this a good strategy for overriding defaults

    - by rh0dium
    Hi SO's, I have a question on good database design practices and I would like to leverage you guys for pointers. The project started out simple: "Hey, we have a bunch of questions we want answered for every project" (no problem). Which turned into: "Hey, we have so many questions, can we group them into sections?" (yup, we can do that). Which led to: "Can we weight these questions? And I don't really want some of these questions for my project" (yes, but we are getting difficult). And then I'm thinking they will want each section to have its own weight.

    Requirements, then: for n number of projects, allow an admin member to select the questions for a project; allow the admin member to re-weight or use the default weights for the questions; allow the admin member to re-weight the sections; and allow team members to answer the questions. So here is what I came up with. Please feel free to comment and provide better examples.

        # models.py
        from django.db import models
        from django.contrib.sites.models import Site
        from django.conf import settings

        class Section(models.Model):
            """This describes the various sections for a checklist."""
            name = models.CharField(max_length=64)
            description = models.TextField()

        class Question(models.Model):
            """This simply provides a simple way to list out the questions."""
            question = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16)
            description = models.TextField()
            section = models.ForeignKey(Section)

        class ProjectQuestion(models.Model):
            """These are the questions relevant to the project."""
            question = models.ForeignKey(Question)
            answer = models.CharField(max_length=255)
            required = models.BooleanField(default=True)
            weight = models.FloatField(default=XXX)

        class Project(models.Model):
            """Here is where we want to gather our questions."""
            questions = models.ManyToManyField(ProjectQuestion)

    Immediate questions: When I start a project, any ideas on how to pre-populate the questions (and ultimately the weights) for the project? Is there a generally accepted method for doing this that I am missing -- basically the idea that you refer to the questions, override your own default weight, and store the answer? It appears that a good chunk of the work will be done in the views and that a lot of checking will need to occur there; is that OK? Again, feel free to give me better strategies! Thanks


  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord test case that mimics my problem. I have a People table with one attribute being a date. I create a view over that table, adding one column which is just that date plus 20 minutes:

        #!/usr/bin/env ruby
        %w|pp rubygems active_record irb active_support date|.each {|lib| require lib}

        ActiveRecord::Base.establish_connection(
          :adapter => "sqlite3",
          :database => "test.db"
        )

        ActiveRecord::Schema.define do
          create_table :people, :force => true do |t|
            t.column :name, :string
            t.column :born_at, :datetime
          end
          execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;"
        end

        class Person < ActiveRecord::Base
          validates_presence_of :name
        end

        class Clown < ActiveRecord::Base
        end

        Person.create(:name => "John", :born_at => DateTime.now)

        pp Person.all.first.born_at.class
        pp Clown.all.first.born_at.class
        pp Clown.all.first.twenty_after_born_at.class

    The problem is, the output is:

        Time
        Time
        String

    when I expect the new datetime attribute of the view to also be a Time or DateTime in the Ruby world. Any ideas? I also tried:

        create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p;

    with the same result.


  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL, and a git repository. What is the easiest way to do this? Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)) connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?


  • Understanding MongoDB(and NoSQL in general) and How to make the best use of it

    - by Earlz
    Hello, I am beginning to think that my next project would work better with a NoSQL solution. The project would involve either a ton of 2-column tables or a ton of dynamic queries with dynamically generated columns in a traditional SQL database, so I feel a NoSQL database would be much cleaner. I'm looking at MongoDB and it looks pretty promising. Anyway, I'm attempting to make sense of it all. Also, I will be using MongoMapper in Ruby. I'm confused as to how to lay things out in such a free-form database. I've read http://stackoverflow.com/questions/2170152/nosql-best-practices and the answer there says that normalization is usually bad in a NoSQL DB. So what would be the best way to lay out, say, a simple blog with users, posts, and comments? My natural thought was to have 3 collections for each and then link them by a unique ID, but this apparently is wrong. So what are some of the ways to lay out such a thing? My concern with the answer given in the other question is: what if the author's name changed? You'd have to go through updating a ton of posts and comments. But is this an OK thing to do with NoSQL?
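
    As a hedged illustration of the denormalized layout usually suggested for this case (comments embedded in the post, the author's name copied in next to a stable id), here is a sketch using the MongoDB Java driver. The collection names, field names, and the driver choice are all illustrative assumptions; the same document shape applies through MongoMapper.

        import java.util.Arrays;

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;

        public class BlogLayoutSketch {
            public static void main(String[] args) {
                // Assumes a local mongod; the connection string is illustrative.
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> posts =
                            client.getDatabase("blog").getCollection("posts");
                    // Denormalized: comments live inside the post, and the author's
                    // display name is copied in next to a stable user id. A rename
                    // then costs one multi-document update keyed on that id -- the
                    // usual NoSQL trade of write cost for read speed.
                    posts.insertOne(new Document("title", "First post")
                            .append("author", new Document("uid", 42).append("name", "earlz"))
                            .append("comments", Arrays.asList(
                                    new Document("author", new Document("uid", 7).append("name", "guest"))
                                            .append("text", "Nice post"))));
                }
            }
        }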


  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the result set from a procedure call on a linked server, in which a procedure on another server is called in turn. I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID) ' +
                          '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' +
                          '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        exec ( @sqlCommand )

    Now the called sproc2 in turn calls a third procedure sproc3 on another linked server, which returns my result set:

        SET @sqlCommand = 'EXEC ' + @targetServer +'.database.usr.sproc3 ' + @param
        exec ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391: The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction. I have already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1. Would there be some alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the result set works (I do this often in the application), but this special case with the temp table and OPENQUERY doesn't work. Or is it just not possible, what I am trying to do? The Microsoft article states: "Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first." Is this my case? PS: I can't avoid the architecture with the three servers.


  • Can I force Apache 2.2 connection close from inside a C module?

    - by Amos Shapira
    Hello, we'd like to have finer-grained control over the connections we serve in a C++ Apache 2.2 module (on CentOS 5). One of the connections needs to stay alive across multiple requests, so we set "KeepAlive" to "On" with a short keep-alive period. But for every such connection we have a few more connections from the browser which we don't want to leave behind, and instead want to force closed after a single request. Some of these connections are on different ports (so we can distinguish them by port, since KeepAlive can be set per virtual host) and some request a different URL (so we can tell from the path and parameters that we don't want to leave them behind). Also, for the one we do want to keep alive, we know that after a certain request we'd like to close it too. So far the only way we have found to "cancel" the keep-alive is to send a polite "Connection: close" header to the client; if the client is not well behaved, or malicious, it can keep the connection open and waste our resources. Is there a way to tell Apache to close the connection from the server side? The documentation advises against a plain close(2) call on the socket, since Apache needs to do some cleanup before that's done. But is there some API or trick to "override" the static "KeepAlive On" configuration dynamically (and convince Apache to call close(2))? Thanks.


  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)). Background: a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types, which convert to :string, :limit => 0 in my schema.rb. This creates the problem of an invalid default value when doing rake db:test:prepare, as in the case of:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `member_id` int(11) DEFAULT 0 NOT NULL, `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL, `hobbies` text, `sports` text, `AStar_activities` text, `how_know_IRC` varchar(100), `IRC_referral` varchar(200), `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it such that it loops through the database tables and replaces all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832


  • Do I really need bindParam?

    - by sandelius
    Hi there! I'm trying to write a little PDO CRUD layer to learn some PDO, and I have a question about bindParam. Here's my update method right now:

        public static function update($conditions = array(), $data = array(), $table = '')
        {
            self::instance();

            // Late static bindings (PHP 5.3)
            $table = ($table === '') ? self::table() : $table;

            // Check which data array we want to use
            $values = (empty($data)) ? self::$_fields : $data;

            $sql = "UPDATE $table SET ";
            foreach ($values as $f => $v) {
                $sql .= "$f = ?, ";
            }

            // let's build the conditions
            self::build_conditions($conditions);

            // fix our WHERE, AND, OR, LIKE conditions
            $extra = self::$condition_string;

            // query string
            $sql = rtrim($sql, ', ') . $extra;

            // let's merge the arrays into one
            $v_val = array_values($values);
            $c_val = array_values($conditions);
            $array = array_merge($v_val, self::$condition_array);

            $stmt = self::$db->prepare($sql);
            return $stmt->execute($array);
        }

    In my self::$condition_array I get all the right values for the placeholders, so the query looks like this:

        UPDATE table SET this = ?, another = ? WHERE title = ? AND time = ?

    As you can see, I don't use bindParam; instead I pass the values in the right order ($array) directly into the execute($array) method. This works like a charm, BUT is it safe not to use bindParam here? If not, how can I do it? Thanks from Sweden, Tobias


  • How to increase the performance of a loop which runs every 'n' minutes

    - by GustlyWind
    Hi, to give a small background on my requirement and what I have accomplished so far: there are 18 scheduler tasks that run at regular intervals (the shortest being 30 mins). Each takes an input of nearly 5000 eligible employees, runs them through a static method that iterates, generates mail content for each employee, and mails it. An average task takes about 9 minutes; multiplied by 18, that is roughly 162 minutes, during which the next tasks will be queuing up (I assume). So my plan is something like the loop below:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance()
                    .sendAlertInfoToEmpForGivenAlertType((Long) employee.next(), configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
        }

    Since I know how many employees will come into my method, I could divide the while loop into two and perform simultaneous operations on two or three employees at a time. Is this possible? Or are there other ways I can improve the performance? Some of the things I have implemented so far: (1) wherever possible, made methods and variables static; (2) didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance); (3) fetched the DB values in one query instead of multiple hits. If I'm successful in optimizing the while loop, I think I can save a couple of minutes. Thanks
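
    A hedged sketch of that split-the-loop idea using a fixed thread pool is below. The class and method names echo the question, but the per-employee call is left as a commented placeholder and the pool size is a tuning assumption (this also presumes the per-employee method is safe to call concurrently).

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class AlertFanOut {
            // Fan the per-employee work out over a small pool instead of one
            // sequential loop. Pool size 4 is a tuning assumption -- measure.
            public static void sendAlerts(List<Long> empIDs, final String configType,
                                          final String schedType) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                for (final Long empId : empIDs) {
                    pool.submit(new Runnable() {
                        public void run() {
                            try {
                                // Placeholder for the question's per-employee call:
                                // ScheduledImplementation.getInstance()
                                //     .sendAlertInfoToEmpForGivenAlertType(empId, configType, schedType);
                                System.out.println("alert sent for employee " + empId);
                            } catch (Exception e) {
                                e.printStackTrace();   // background task: log and continue
                            }
                        }
                    });
                }
                pool.shutdown();                          // accept no new work...
                pool.awaitTermination(1, TimeUnit.HOURS); // ...and wait for this batch
            }
        }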


  • Return Double from Boost thread

    - by Benedikt Wutzi
    Hi, I have a Boost thread which should return a double. The function looks like this:

        void analyser::findup(const double startwl, const double max, double &myret) {
            this->data.begin();
            for (int i = (int)data.size(); i >= 0; i--) {
                if (this->data[i].lambda > startwl) {
                    if (this->data[i].db >= (max - 30)) {
                        myret = this->data[i+1].lambda;
                        std::cout << "in thread " << myret << std::endl;
                        return;
                    }
                }
            }
        }

    This function is called by another function:

        void analyser::start_find_up(const double startwl, const double max) {
            double tmp = -42.0;
            boost::thread up(&analyser::findup, *this, startwl, max, tmp);
            std::cout << "before join " << tmp << std::endl;
            up.join();
            std::cout << "after join " << tmp << std::endl;
        }

    Anyway, I've tried and googled almost everything, but I can't get it to return a value. The output looks like this right now:

        before join -42
        in thread 843.487
        after join -42

    Thanks for any help.


  • How to handle update events on an ASP.NET GridView?

    - by Bogdan M
    Hello, this may sound silly, but I need to find out how to handle an update event from a GridView. First of all, I have a DataSet containing a typed DataTable with a typed TableAdapter, based on a "select all" query, with auto-generated Insert, Update, and Delete methods. Then, in my aspx page, I have an ObjectDataSource wired to my typed TableAdapter's Select, Insert, Update, and Delete methods. Finally, I have a GridView bound to this ObjectDataSource, with the default Edit, Update, and Cancel links. How should I implement the edit functionality? Should I have something like this?

        protected void GridView_RowEditing(object sender, GridViewEditEventArgs e)
        {
            using (MyTableAdapter ta = new MyTableAdapter())
            {
                ta.Update(...);
                TypedDataTable dt = ta.GetRecords();
                this.GridView.DataSource = dt;
                this.GridView.DataBind();
            }
        }

    In this scenario, I have the feeling that I write the changes to the DB and then retrieve and bind all the data, not only the modified parts. Is there any way to update only the DataSet, and have it in turn update both the database and the GridView? I do not want to retrieve all the data after a CRUD operation is performed; I just want to retrieve the changes that were made. Thanks. PS: I'm using .NET 3.5 and VS 2008 with SP1.


  • Eager loading OneToMany in Hibernate with JPA2

    - by pihentagy
    I have a simple @OneToMany between Person and Pet entities:

        @OneToMany(mappedBy="owner", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
        public Set<Pet> getPets() {
            return pets;
        }

    I would like to load all Persons with their associated Pets, so I came up with this (inside a test class):

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration
        public class AppTest {
            @Test
            @Rollback(false)
            @Transactional(readOnly = false)
            public void testApp() {
                CriteriaBuilder qb = em.getCriteriaBuilder();
                CriteriaQuery<Person> c = qb.createQuery(Person.class);
                Root<Person> p1 = c.from(Person.class);
                SetJoin<Person, Pet> join = p1.join(Person_.pets);
                TypedQuery<Person> q = em.createQuery(c);
                List<Person> persons = q.getResultList();
                for (Person p : persons) {
                    System.out.println(p.getName());
                    for (Pet pet : p.getPets()) {
                        System.out.println("\t" + pet.getNick());
                    }
                }
            }
        }

    However, turning on SQL logging shows that it executes 3 queries (having 2 Persons in the DB):

        Hibernate: select person0_.id as id0_, person0_.name as name0_, person0_.sex as sex0_ from Person person0_ inner join Pet pets1_ on person0_.id=pets1_.owner_id
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_, pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_ where pets0_.owner_id=?
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_, pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_ where pets0_.owner_id=?

    Any tips? Thanks, Gergo
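
    One hedged fix, keeping the question's metamodel names: replace the plain join with a fetch join (plus distinct), so the EAGER collection is populated by the same select instead of one extra query per person. A drop-in sketch for the query portion of testApp(), assuming the same em and the generated Person_ metamodel:

        // Drop-in replacement for the query portion of testApp() above.
        CriteriaBuilder qb = em.getCriteriaBuilder();
        CriteriaQuery<Person> c = qb.createQuery(Person.class);
        Root<Person> root = c.from(Person.class);
        root.fetch(Person_.pets);        // fetch join: pets arrive in the same select
        c.select(root).distinct(true);   // one row per person despite multiple pets
        List<Person> persons = em.createQuery(c).getResultList();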


  • Finds in Rails 3 and ActiveRelation

    - by TheDelChop
    Guys, I'm trying to understand the new Arel engine in Rails 3 and I've got a question. I have two models, User and Task:

        class User < ActiveRecord::Base
          has_many :tasks
        end

        class Task < ActiveRecord::Base
          belongs_to :user
        end

    Here are my routes, to imply the relation:

        resources :users do
          resources :tasks
        end

    And here is my Tasks controller:

        class TasksController < ApplicationController
          before_filter :load_user

          def new
            @task = @user.tasks.new
          end

          private

          def load_user
            @user = User.where(:id => params[:user_id])
          end
        end

    The problem is, I get the following error when I try to invoke the new action:

        NoMethodError: undefined method `tasks' for #<ActiveRecord::Relation:0x3dc2488>

    I am sure my problem is with the new Arel engine; does anybody understand what I'm doing wrong? Sorry guys, here is my schema.rb file:

        ActiveRecord::Schema.define(:version => 20100525021007) do
          create_table "tasks", :force => true do |t|
            t.string   "name"
            t.integer  "estimated_time"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.integer  "user_id"
          end

          create_table "users", :force => true do |t|
            t.string   "email",              :default => "", :null => false
            t.string   "encrypted_password", :limit => 128, :default => "", :null => false
            t.string   "password_salt",      :default => "", :null => false
            t.string   "reset_password_token"
            t.string   "remember_token"
            t.datetime "remember_created_at"
            t.integer  "sign_in_count",      :default => 0
            t.datetime "current_sign_in_at"
            t.datetime "last_sign_in_at"
            t.string   "current_sign_in_ip"
            t.string   "last_sign_in_ip"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.string   "username"
          end

          add_index "users", ["email"], :name => "index_users_on_email", :unique => true
          add_index "users", ["reset_password_token"], :name => "index_users_on_reset_password_token", :unique => true
          add_index "users", ["username"], :name => "index_users_on_username", :unique => true
        end

    Thank you, Joe


  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids, and I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help here. I've managed to create a proof of concept getting MVC and Silverlight to talk back and forth by combining these two techniques: creating a REST API using MVC, and MVC + Silverlight. I also got some help on Stack Overflow: silverlight-grids-mvc-http-post. Essentially all I'm doing is embedding a Silverlight object in a view, serializing the model data as JSON, and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However, I am a bit concerned that I could be painting myself into a corner with this approach: since I don't have much experience with either technology, I'm worried I'll get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

