Search Results

Search found 58436 results on 2338 pages for 'multipartform data'.


  • Accessing the buffered request body in HttpWebRequest?

    - by Greg Beech
    By default HttpWebRequest has AllowWriteStreamBuffering set to true, which means that all data written to the request stream is buffered inside the object. I'd like to access this buffer of data after it has been written, but can't seem to find any way to do so. I'm happy for it to fail if AllowWriteStreamBuffering is false. Is there any way to do this, or is it not exposed anywhere?
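
    A sketch of a workaround, assuming it is acceptable to keep your own copy of the body rather than reading the request's internal buffer (WritePayload is a hypothetical stand-in for whatever produces the payload): build the body in a MemoryStream you control, then copy it to the request stream.

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
        byte[] body;
        using (var buffer = new MemoryStream())
        {
            WritePayload(buffer);        // hypothetical: writes the request body
            body = buffer.ToArray();     // your own accessible copy of the "buffered" data
        }
        request.ContentLength = body.Length;
        using (var requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);   // standard HttpWebRequest API
        }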

    Read the article

  • Javascript form validation on client side without server side - is it safe?

    - by Vitali Ponomar
    Suppose I have a form with JavaScript client-side validation and no server-side validation. If a user disables JavaScript in his browser, there will be no submit button, so he cannot send me any data with JS disabled. But I do not know whether there is any way to alter my validation instructions in the client browser so that he could send me untrusted data and do some damage to my database. Thanks in advance, and sorry for my (possibly) obvious question!
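
    For context, a demonstration of why the answer matters either way: the browser is only one possible HTTP client, so a user can post directly to the form's action URL without ever running the page's JavaScript. A sketch with curl (the endpoint and field names are hypothetical):

        curl -d "comment=<script>alert(1)</script>" http://yoursite.example/submit

    Anything arriving this way has bypassed the client-side checks entirely, which is why validation needs to be repeated on the server.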

    Read the article

  • What method should be used for searching this mysql dataset?

    - by GeoffreyF67
    I've got a mysql dataset that contains 86 million rows. I need to have a relatively fast search through this data. The data I'll be searching through is all strings. I also need to do partial matches. Now, if I have 'foobar' and search for '%oob%' I know it'll be really slow - it has to look at every row to see if there is a match. What methods can be used to speed queries like this up? G-Man
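
    A sketch of the usual first step, assuming word-level matches are acceptable (a FULLTEXT index cannot serve arbitrary infix patterns like '%oob%', but it turns whole-word lookups into index hits; table and column names are hypothetical, and at this MySQL vintage FULLTEXT requires a MyISAM table):

        ALTER TABLE items ADD FULLTEXT INDEX ft_name (name);
        SELECT * FROM items WHERE MATCH(name) AGAINST ('foobar');

    For true substring search at this scale, an external full-text engine (e.g. Sphinx or Lucene) or an n-gram index is the more common route.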

    Read the article

  • JNI - GetObjectField returns NULL

    - by Daniel
    I'm currently working on Mangler's Android implementation. I have a Java class that looks like so:

        public class VentriloEventData {
            public short type;

            public class _pcm {
                public int length;
                public short send_type;
                public int rate;
                public byte channels;
            };

            _pcm pcm;
        }

    The signature for my pcm object:

        $ javap -s -p VentriloEventData
        ...
        org.mangler.VentriloEventData$_pcm pcm;
          Signature: Lorg/mangler/VentriloEventData$_pcm;

    I am implementing a native JNI function called getevent, which will write to the fields in an instance of the VentriloEventData class. For what it's worth, it's defined and called in Java like so:

        public static native int getevent(VentriloEventData data);

        VentriloEventData data = new VentriloEventData();
        getevent(data);

    And my JNI implementation of getevent:

        JNIEXPORT jint JNICALL Java_org_mangler_VentriloInterface_getevent(JNIEnv* env, jobject obj, jobject eventdata) {
            v3_event *ev = v3_get_event(V3_BLOCK);
            if (ev != NULL) {
                jclass event_class = (*env)->GetObjectClass(env, eventdata);

                // Event type.
                jfieldID type_field = (*env)->GetFieldID(env, event_class, "type", "S");
                (*env)->SetShortField(env, eventdata, type_field, 1234);

                // Get PCM class.
                jfieldID pcm_field = (*env)->GetFieldID(env, event_class, "pcm", "Lorg/mangler/VentriloEventData$_pcm;");
                jobject pcm = (*env)->GetObjectField(env, eventdata, pcm_field);
                jclass pcm_class = (*env)->GetObjectClass(env, pcm);

                // Set PCM fields.
                jfieldID pcm_length_field = (*env)->GetFieldID(env, pcm_class, "length", "I");
                (*env)->SetIntField(env, pcm, pcm_length_field, 1337);

                free(ev);
            }
            return 0;
        }

    The code above works fine for writing into the type field (which is not wrapped by the _pcm class). Once getevent is called, data.type is verified to be 1234 on the Java side :) My problem is that the assertion "pcm != NULL" fails. Note that pcm_field != NULL, which probably indicates that the signature of that field is correct... so there must be something wrong with my call to GetObjectField. It looks fine, though, if I compare it to the official JNI docs. I've been bashing my head on this problem for the past 2 hours and I'm getting a little desperate... hoping a different perspective will help me out on this one.

    Read the article

  • Which RAM type do our servers support?

    - by Mikunos
    I need to increase the RAM in our Dell servers, but with lshw I cannot see whether the installed RAM is UDIMM or RDIMM.

        Handle 0x1100, DMI type 17, 28 bytes
        Memory Device
                Array Handle: 0x1000
                Error Information Handle: Not Provided
                Total Width: 72 bits
                Data Width: 64 bits
                Size: 2048 MB
                Form Factor: DIMM
                Set: 1
                Locator: DIMM_A1
                Bank Locator: Not Specified
                Type: <OUT OF SPEC>
                Type Detail: Synchronous
                Speed: 1333 MHz (0.8 ns)
                Manufacturer: 00CE00B380CE
                Serial Number: 8244850B
                Asset Tag: 02103961
                Part Number: M393B5773CH0-CH9

        Handle 0x1101, DMI type 17, 28 bytes
        Memory Device
                Array Handle: 0x1000
                Error Information Handle: Not Provided
                Total Width: 72 bits
                Data Width: 64 bits
                Size: 2048 MB
                Form Factor: DIMM
                Set: 1
                Locator: DIMM_A2
                Bank Locator: Not Specified
                Type: <OUT OF SPEC>
                Type Detail: Synchronous
                Speed: 1333 MHz (0.8 ns)
                Manufacturer: 00CE00B380CE
                Serial Number: 8244855D
                Asset Tag: 02103961
                Part Number: M393B5773CH0-CH9

        Handle 0x1102, DMI type 17, 28 bytes
        Memory Device
                Array Handle: 0x1000
                Error Information Handle: Not Provided
                Total Width: 72 bits
                Data Width: 64 bits
                Size: 2048 MB
                Form Factor: DIMM
                Set: 2
                Locator: DIMM_A3
                Bank Locator: Not Specified
                Type: <OUT OF SPEC>
                Type Detail: Synchronous
                Speed: 1333 MHz (0.8 ns)
                Manufacturer: 00CE00B380CE
                Serial Number: 8244853E
                Asset Tag: 02103961
                Part Number: M393B5773CH0-CH9

    How do we find out which RAM is the right one to buy? Thanks
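
    A sketch of a quicker check (assuming dmidecode is available; the sudo prefix is an assumption about privileges): the part number is usually the fastest route to UDIMM vs RDIMM, since it can be looked up on the module vendor's site, and Total Width 72 bits against Data Width 64 bits already tells you these are ECC modules.

        sudo dmidecode --type memory | grep -E 'Part Number|Total Width|Data Width'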

    Read the article

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has 2 drives: one UFS for the system root, and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at CloneZilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip. They are much larger than my third drive, but hardly hold any real data. Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, spurious "backup" snapshots could skew performance data. Thank you
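
    A sketch of the zero-fill-then-dd idea for the UFS drive (device and mount paths are hypothetical; on the ZFS drive, zero-filling may not help the same way if compression is enabled):

        # fill free space with zeros so gzip can collapse it, then remove the filler
        dd if=/dev/zero of=/export/zerofile bs=1048576; rm /export/zerofile; sync
        # image the whole disk (Solaris slice 2 conventionally spans the disk) to the third drive
        dd if=/dev/rdsk/c0t0d0s2 bs=1048576 | gzip > /backup/disk0.img.gz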

    Read the article

  • segfault on vector<struct>

    - by Andre
    Hello, I created a struct to hold some data and then declared a vector to hold that struct. But when I do a push_back I get a damn segfault and I have no idea why! My struct is defined as:

        typedef struct Group {
            int codigo;
            string name;
            int deleted;
            int printers;
            int subpage;

            /* included this when it started segfaulting */
            Group() {
                name.reserve(MAX_PRODUCT_LONG_NAME);
            }

            ~Group() {
                name.clear();
            }

            Group(const Group &b) {
                codigo = b.codigo;
                name = b.name;
                deleted = b.deleted;
                printers = b.printers;
                subpage = b.subpage;
            }
            /* end of new stuff */
        };

    Originally, the struct didn't have the copy constructor or destructor. I added them later when I read the post below:

    http://stackoverflow.com/questions/676575/seg-fault-after-is-item-pushed-onto-stl-container

    but the end result is the same. There is one thing that is bothering me as hell! When I first push some data into the vector, everything goes fine. Later on in the code, when I try to push some more data into the vector, my app just segfaults! The vector is declared as vector<Group> Groups and is a global variable in the file where I am using it. No externs anywhere else, etc. I can trace the error to:

        _M_deallocate(this->_M_impl._M_start,
                      this->_M_impl._M_end_of_storage - this->_M_impl._M_start);

    in vector.tcc, when I finish adding/copying the last element to the vector... as far as I can tell. I shouldn't need anything to do with a copy constructor, as a shallow copy should be enough for this. I'm not even allocating any space (but I did a reserve for the string to try it out). I have no idea what the problem is! I'm running this code on OpenSuse 10.2 with gcc 4.1.2. I'm not really too eager to upgrade gcc because of backward compatibility issues... This code worked "perfectly" on my Windows machine; I compiled it with gcc 3.4.5 (MinGW) without any problems... help!

    EDIT: This is how I push data:

        Group tmp_grp;
        (...)
        tmp_grp.name = "Nova ";
        tmp_grp.codigo = GetGroupnextcode();
        tmp_grp.deleted = 0;
        tmp_grp.printers = 0;
        tmp_grp.subpage = 0;
        Groups.push_back(tmp_grp);

    Read the article

  • Silverlight Application

    - by Avijit Singh
    I have prepared a Silverlight application and my database is on the server. It runs smoothly for small sets of data, but for a large data set it gives the error: "The remote server returned an error: NotFound". How can I resolve it? Please, anyone, solve this problem. I have built it in ASP.NET with C#, so kindly provide the solution in C# if possible.
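
    One frequent cause of NotFound on large result sets (an assumption here, since this error masks the real server-side fault) is WCF's default 65536-byte message size limit. A sketch of the relevant binding attributes, to be mirrored in ServiceReferences.ClientConfig and the server's web.config (the binding name is hypothetical):

        <basicHttpBinding>
            <binding name="MyServiceBinding"
                     maxReceivedMessageSize="2147483647"
                     maxBufferSize="2147483647" />
        </basicHttpBinding>

    Enabling WCF tracing on the server is the reliable way to see the underlying exception behind the generic NotFound.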

    Read the article

  • Java Apache Commons users

    - by Tom Brito
    Is there anything in Apache Commons to convert an Object to a byte array, like the following method does?

        public static byte[] toByteArray(Object obj) throws IOException {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(baos);
            oos.writeObject(obj);
            oos.flush();
            byte[] data = baos.toByteArray();
            return data;
        }

    [try-finally blocks closing the streams were omitted for simplicity]
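
    Commons Lang does offer this. A sketch using SerializationUtils (package org.apache.commons.lang3 in Lang 3.x, org.apache.commons.lang in 2.x; the object must implement Serializable):

        import org.apache.commons.lang3.SerializationUtils;

        // a String is Serializable, standing in for any serializable object
        byte[] data = SerializationUtils.serialize("some serializable object");
        Object roundTrip = SerializationUtils.deserialize(data);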

    Read the article

  • Can this query be corrected, or is a different table structure needed? (question is clear, detailed, d

    - by sandeepan
    This is a bit lengthy, but I have provided sufficient details and kept things very clear. Please see if you can help. (I will surely accept an answer if it solves my problem.) I am sure a person experienced with this can help, or can suggest how to decide on the table structure.

    About the system:

    - There are tutors who create classes.
    - A tag-based search approach is being followed.
    - Tag relations are created/edited when new tutors register or edit profile data, and when tutors create classes (this makes tutors and classes searchable). For simplicity, let us consider only tutor name and class name as the fields which are matched against search keywords.

    In this example, I am considering:

    - tutor "Sandeepan Nath" has created a class called "first class"
    - tutor "Bob Cratchit" has created a class called "new class"

    Desired search results: AND logic is to be applied to the search keywords and matched against class and tutor data (class name + tutor name). In other words, all those classes should be shown such that all the search terms are present in the class name or its tutor's name. Examples to be clear:

    - Searching "first class" returns the class with id_wc = 1. Working.
    - Searching "Sandeepan class" should also return the class with id_wc = 1. Not working in System 2.

    The problem with profile editing and searching, to tell it in one sentence: I am facing a conflict between the ease of profile edition (editing of tag relations when tutor profiles are edited) and the ease of the search logic. In the beginning, we had one table structure and search was easy, but the tag edition logic was very clumsy and unmaintainable (check System 1 in the section below). So we created separate tag relations tables to make profile edition simpler, but search has become difficult. Please dump the tables so that you can run the search query I have given below and see the results.

    System 1 (previous system - search easy, profile edition difficult): only one table, all_tag_relations, had all the tag relations. The tags table below is common to both systems 1 and 2.

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) unsigned DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 1, 1, 1), (4, 2, 1, 1),
        (5, 3, 1, 1), (6, 4, 1, 1), (7, 6, 2, NULL), (8, 7, 2, NULL),
        (9, 6, 2, 2), (10, 7, 2, 2), (11, 5, 2, 2), (12, 4, 2, 2);

        CREATE TABLE IF NOT EXISTS `tags` (
          `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `tag` varchar(255) DEFAULT NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`),
          FULLTEXT KEY `tag_5` (`tag`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8;

        INSERT INTO `tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

    Please note that for every class, the tag relations of its tutor have to be duplicated. For example, for the class with id_wc = 1, the tag relation records with id_tag_rel = 3 and 4 are actually extras if you compare them with the tag relation records with id_tag_rel = 1 and 2.
    System 2 (present system - profile edition easy, search difficult): two separate tables, tutors_tag_relations and webclasses_tag_relations, hold the corresponding tag relations data (please dump into a separate database):

        CREATE TABLE IF NOT EXISTS `tutors_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `tutors_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`) VALUES
        (1, 1, 1), (2, 2, 1), (3, 6, 2), (4, 7, 2);

        CREATE TABLE IF NOT EXISTS `webclasses_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `webclasses_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `webclasses_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 3, 1, 1), (2, 4, 1, 1), (3, 5, 2, 2), (4, 4, 2, 2);

        CREATE TABLE IF NOT EXISTS `tags` (
          `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `tag` varchar(255) DEFAULT NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`),
          FULLTEXT KEY `tag_5` (`tag`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8;

        INSERT INTO `tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) unsigned DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        insert into All_Tag_Relations select NULL, id_tag, id_tutor, NULL from Tutors_Tag_Relations;
        insert into All_Tag_Relations select NULL, id_tag, id_tutor, id_wc from Webclasses_Tag_Relations;

    Here you can see how easily a tutor's first name can be edited in only one place. But search has become really difficult, so, on being advised to use a temporary table, I am creating one at every search request, dumping all the necessary data into it, and then searching from it. That is, I am creating this all_tag_relations table at search run time, dumping into it all the data from the two tables tutors_tag_relations and webclasses_tag_relations. But I am still not able to get classes if I search by tutor name. This is the query which searches "first class"; running it on both systems shows correct results (returns the class with id_wc = 1):
        SELECT wtagrels.id_wc,
               SUM(DISTINCT (wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT (wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    But searching for "Sandeepan class" works only with the first system. Here is the query which searches "Sandeepan class":

        SELECT wtagrels.id_wc,
               SUM(DISTINCT (wtagrels.id_tag = 1)) AS key_1_total_matches,
               SUM(DISTINCT (wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 1 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    Can anybody alter this query and somehow do a proper join or something to get correct results? That would solve my problem in a nice way. As you can figure out, the reason why it does not work in System 2 is that in System 1, for every class, one additional tag relation linking the class and the tutor name is present (e.g. for "first class", the records with id_tag_rel 3 and 4), which returns the class when searching by tutor name. So you see the trade-off between search and profile-edition difficulty in the two systems. How do I overcome both? I have to reach a conclusion soon.

    So far, my reasoning is that it is definitely not good from a code-maintainability point of view to follow the single tag relations table structure of System 1, because in a real system, while editing a field like "tutor qualifications", there can be as many records in the tag relations table as there are words in a tutor's qualification (one word in a field = one tag relation). Now suppose a tutor has 100 classes. When he edits his qualification, all the tag relation rows corresponding to him are deleted, and then as many copies are created (as per the new qualification data) as there are classes. This becomes particularly difficult if more searchable fields are added later. The code cannot be robust.

    Is the best solution to follow System 2 (edition has to be in one table - no extra work for each and every class) and somehow re-create the all_tag_relations table like System 1 (from the tables tutors_tag_relations and webclasses_tag_relations), creating the extra tutor tag relations for each and every class by a tutor (which are currently missing in System 2's temporary all_tag_relations table)? That would be a time-consuming script. I doubt that the table can be recreated without resorting to a PHP script (MySQL alone cannot do that). But the problem is that running all this at search time will definitely make search slow.

    So how do such systems work? How are such situations handled? I thought we could run a cron which initiates that PHP script, say, every minute, and replaces the existing all_tag_relations table as per the new tag relations from tutors_tag_relations and webclasses_tag_relations ("replaces" means it creates a new table, deletes the original and renames the new one as all_tag_relations, otherwise search won't work during that period - or is there any better way to do that?). Anyway, the result would be that any changes by tutors would be reflected in search within the next minute, not immediately. An alternative would be to initiate that PHP script every time a tutor edits his profile. But here again, since many users may edit their profiles concurrently, would the creation of so many tables be a burden, and could MySQL make the server slow?
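
    A sketch of a join-based rewrite for System 2 that avoids the temporary table entirely (hedged: verified only against the sample dumps above): join each class's tags to its tutor's tags and let each keyword match either side.

        SELECT wtr.id_wc
        FROM webclasses_tag_relations AS wtr
        JOIN tutors_tag_relations AS ttr ON ttr.id_tutor = wtr.id_tutor
        GROUP BY wtr.id_wc
        HAVING MAX(wtr.id_tag = 1 OR ttr.id_tag = 1) = 1   -- 'Sandeepan'
           AND MAX(wtr.id_tag = 4 OR ttr.id_tag = 4) = 1   -- 'class'
        LIMIT 0, 20

    With the dumps above, this returns id_wc = 1 for "Sandeepan class", and also for "first class" (with tag ids 3 and 4 substituted), without duplicating tutor tag relations per class.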
    Any help would be appreciated, and a working solution will be accepted as the answer. Thanks, Sandeepan

    Read the article

  • Asus K55VM usb 3.0 issue

    - by user2141481
    Good day, superusers! I own the above laptop and I found out that there are some unknown and unusual issues with the USB 3.0 ports. I hadn't noticed anything strange until now. I got a new Toshiba USB 3.0 external HDD, and when I try to copy a larger amount of data from my disk to the external HDD, the OS (Windows 7) randomly starts ignoring the external HDD. It doesn't shut it down; it just stops responding, but the light on the HDD is still lit. I get an error that the files cannot be copied. I have reinstalled Windows 7 and installed all drivers (including the Intel chipset drivers, of course), and the issue is still present. It acts normally when copying small amounts of data. Also, I heard that some Intel chipsets have an issue with USB - something about the connectors not transferring power when the USB device enters some kind of "low power mode", causing the device to stop responding until you plug it out and in again. But the thing is, my Intel® Chief River Chipset HM76 is not on the list of affected hardware (not ENTIRELY sure, though). If anyone has any idea what the problem might be, I'd be grateful. Edit: The HDD works perfectly fine, even for large amounts of data, if plugged into the USB 2.0 port!

    Read the article

  • jQuery AJAX type: 'GET', passing value problem.

    - by Trez
    I have a jQuery AJAX call with type: 'GET', like this:

        $.ajax({
            type: 'GET',
            url: '/createUser',
            data: 'userId=12345&userName=test',
            // the success handler must be a property of the options object
            success: function(data) {
                alert('successful');
            }
        });

    In my console the output is:

        GET: http://sample.com/createUser?userId=12345&userName=test
        params:
            userId    12345
            userName  test

    In my script I should get the values using $_GET['userId'] and $_GET['userName'], but I can't get the values passed in by the AJAX request using the GET method. Any ideas on how to do this? thanks,

    Read the article

  • Ajax cross domain call

    - by jAndy
    Hi folks, I know about the AJAX same-origin policy. So I can't just call "http://www.google.com" over an AJAX HTTP request and display the results somewhere on my site. I tried it with dataType "jsonp"; that would actually work, but I get a syntax error (obviously because the received data is not JSON-formatted). Is there any other possibility to receive/display data from a foreign domain? Do iframes follow the same policy? Kind regards --Andy
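
    One common workaround is a small same-origin proxy: the browser talks to your server, and your server fetches the foreign page. A sketch (proxy.php and the target element are hypothetical, and a real deployment must whitelist target hosts so this doesn't become an open proxy):

        <?php
        // proxy.php - fetch a fixed foreign page server-side and relay it
        echo file_get_contents('http://www.google.com/');

        // on the page, same-origin AJAX now works:
        // $.get('/proxy.php', function(html) { $('#target').html(html); });

    As for iframes: they may display a foreign page, but scripts on your page cannot read the framed document's content - the same-origin policy applies there too.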

    Read the article

  • jqgrid:cancel deleting row(s)

    - by ohana
    hi, I use jqGrid for my data and enable users to delete rows from the grid. But when the user clicks the 'delete' button, jqGrid pops up a 'Delete' dialog asking whether they want to delete or cancel. How can I check whether the user chose 'cancel' before I actually submit the data deletion to the server? Thanks.

    Read the article

  • SharePoint Web Analytics not tracking usage for main application

    - by Chris W
    My SP 2010 setup is two separate applications - one for the main portal and one for MySite. Whilst Web Analytics is tracking usage of MySite, it's not showing any stats for the main portal. The only thing it lists is the number of site collections, but no page views etc. The WA service is clearly running, since it picks up data for MySite. In "Configure web analytics and health data collection" everything is ticked. I can't find any obvious settings that differ between the two applications. Where should I look to get usage tracking working correctly? Edit: Having played with the date ranges, I see that actually I've got no stats in the last 7 days for any site at all, including MySite, which had been working at some point previously. Edit: What does each service (WA Data Processing Service vs WA Web Service) do, and where should each be active? At present they're both running on an app server but not on the WFEs (although they were running on the WFEs previously). From what I can gather they only need to run on an app server, but I find it strange that the only logged activity I see in the staging database relates to Central Admin URLs on the app server and nothing from the WFEs.

    Read the article

  • ggplot2 footnote

    - by user338714
    What is the best way to add a footnote to the bottom of a plot created with ggplot2? I've tried using a combination of the logic noted here:

    http://www.r-bloggers.com/r-good-practice-%E2%80%93-adding-footnotes-to-graphics/

    as well as ggplot2's annotate function:

        p + annotate("text", label = "Footnote",
                     x = unit(1, "npc") - unit(2, "mm"), y = unit(2, "mm"),
                     just = c("right", "bottom"),
                     gp = gpar(cex = 0.7, col = grey(0.5)))

    but I am getting the error:

        Error in as.data.frame.default(x[[i]], optional = TRUE, stringsAsFactors = stringsAsFactors) :
          cannot coerce class c("unit.arithmetic", "unit") into a data.frame
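
    A sketch of the grid-based route from the linked post (assuming the plot object p exists; annotate() expects data coordinates, which is why the unit objects fail to coerce - grid.text works in device coordinates instead):

        library(grid)

        print(p)  # draw the ggplot first, then write into the same device
        grid.text("Footnote",
                  x = unit(1, "npc") - unit(2, "mm"),
                  y = unit(2, "mm"),
                  just = c("right", "bottom"),
                  gp = gpar(cex = 0.7, col = grey(0.5)))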

    Read the article

  • How to loop over a JSON object?

    - by Damiano
    Hello, I have this JSON object:

        {"time": "123456789",
         "raw": "chat_history",
         "data": {
             "msg": [
                 {"time": 1111111111, "user": "user1", "text": "text from user1"},
                 {"time": 2222222222, "user": "user2", "text": "text from user2"},
                 {"time": 3333333333, "user": "user3", "text": "text from user3"},
                 {"time": 4444444444, "user": "user4", "text": "text from user4"}
             ]
         }}

    I have to create a FOR loop over the elements of data.msg and print them. I would like the loop to print these results:

        1111111111 - user1 - text from user1
        2222222222 - user2 - text from user2
        3333333333 - user3 - text from user3
        4444444444 - user4 - text from user4

    Could you help me? Thank you very much
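
    A sketch in plain JavaScript, assuming obj holds the parsed JSON (e.g. from JSON.parse):

        for (var i = 0; i < obj.data.msg.length; i++) {
            var m = obj.data.msg[i];
            // prints e.g. "1111111111 - user1 - text from user1"
            console.log(m.time + ' - ' + m.user + ' - ' + m.text);
        }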

    Read the article

  • decodeURIComponent for postgres

    - by maletin
    I use decodeURIComponent and encodeURIComponent in JavaScript. Before I store this data in a UTF-8 PostgreSQL database, I should decode it:

        $my_data = pg_escape_string(utf8_encode($_POST['my_data']));

    I'm looking for a PostgreSQL function to convert JavaScript-encoded data.
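
    A sketch of the PHP-side alternative, assuming decoding before storage is acceptable (so no PostgreSQL function is needed): PHP's rawurldecode() undoes encodeURIComponent-style %XX escapes, and unlike urldecode() it does not turn + into a space, matching JavaScript's behaviour.

        $my_data = pg_escape_string(rawurldecode($_POST['my_data']));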

    Read the article

  • iPhone: How to Embed NSData in XML for Upload to Server?

    - by milanjansari
    Hello, I want to pass data to a server and store the file there in a database as binary data.

        NSData *myData = [NSData dataWithContentsOfFile:pathDoc];
        pathDoc = [NSString stringWithFormat:@"<size>%d</size><type>%d</type><cdate>%@</cdate><file>%c</file><fname>File</fname>",
                   fileSizeVal, filetype, creationDate, myData];

    Any idea about this? Thank you, Milan
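
    A sketch of one common approach (an assumption, since the question doesn't say how the server decodes the payload): %c formats a single character, so it can't embed NSData; Base64-encoding the bytes gives XML-safe text the server can decode back to binary. base64EncodedStringWithOptions: is available from iOS 7 on.

        NSData *myData = [NSData dataWithContentsOfFile:pathDoc];
        // Base64 turns arbitrary bytes into XML-safe ASCII (iOS 7+ API)
        NSString *encoded = [myData base64EncodedStringWithOptions:0];
        NSString *xml = [NSString stringWithFormat:
            @"<size>%d</size><type>%d</type><cdate>%@</cdate><file>%@</file><fname>File</fname>",
            fileSizeVal, filetype, creationDate, encoded];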

    Read the article

  • Instagram API and Zend/Loader.php

    - by Jaemin Kim
    I want to use the Instagram API and I found this code:

        <html>
        <head></head>
        <body>
          <h1>Popular on Instagram</h1>
          <?php
          // load Zend classes
          require_once 'Zend/Loader.php';
          Zend_Loader::loadClass('Zend_Http_Client');

          // define consumer key and secret
          // available from Instagram API console
          $CLIENT_ID = 'YOUR-CLIENT-ID';
          $CLIENT_SECRET = 'YOUR-CLIENT-SECRET';

          try {
              // initialize client
              $client = new Zend_Http_Client('https://api.instagram.com/v1/media/popular');
              $client->setParameterGet('client_id', $CLIENT_ID);

              // transmit request and decode response
              $response = $client->request();
              $result = json_decode($response->getBody());

              // display images
              $data = $result->data;
              if (count($data) > 0) {
                  echo '<ul>';
                  foreach ($data as $item) {
                      echo '<li style="display: inline-block; padding: 25px"><a href="' . $item->link . '"><img src="' . $item->images->thumbnail->url . '" /></a> <br/>';
                      echo 'By: <em>' . $item->user->username . '</em> <br/>';
                      echo 'Date: ' . date('d M Y h:i:s', $item->created_time) . '<br/>';
                      echo $item->comments->count . ' comment(s). ' . $item->likes->count . ' likes. </li>';
                  }
                  echo '</ul>';
              }
          } catch (Exception $e) {
              echo 'ERROR: ' . $e->getMessage() . print_r($client);
              exit;
          }
          ?>
        </body>
        </html>

    Here I found Zend/Loader.php, and I've never heard of that. Is it okay to go to http://www.zend.com/en/products/guard/downloads and download Zend for Linux? And for another question: is this code available to use with the Instagram API? Lastly, could you let me know whether there is any simpler code for using the Instagram API? Thank you.

    Read the article

  • free up unused space in a qcow2 image file on kvm/qemu

    - by bmaeser
    We are using KVM/QEMU with qcow2 images for our virtual machines. qcow2 has this nice feature where the image file only allocates the space actually needed by the virtual machine. But how do I shrink the image file back if the virtual machine's allocated space gets smaller? Example:

    1.) I create a new image in qcow2 format, size 100 GB.
    2.) I use this image to install Ubuntu. The installation needs about 10 GB, and the image file grows to about 10 GB. Nothing unexpected so far.
    3.) I fill up the image with about 40 GB of additional data. The image file grows to 50 GB. I am OK with that :-)
    4.) This is where it gets strange: I delete all of the 40 GB of data in the image, but the image size still eats up 50 GB.

    Question: how do I free up that 40 GB of data and shrink the image to the 10 GB actually needed? Thanks in advance, berni
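
    A sketch of the usual zero-then-reconvert approach (file and image names are hypothetical, and the host needs enough free disk for the copy):

        # inside the guest: overwrite free space with zeros, then delete the filler
        dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile; sync
        # on the host, with the VM shut down: rewrite the image, dropping zeroed clusters
        qemu-img convert -O qcow2 vm.qcow2 vm-shrunk.qcow2
        mv vm-shrunk.qcow2 vm.qcow2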

    Read the article

  • Reset ID auto-increment? phpMyAdmin

    - by Marcelo
    Hi, I was testing some data in the tables of my database to see if there were any errors. Now I have cleaned out all the testing data, but my id (auto increment) does not start from 1 anymore. Can I reset it, and how? Sorry for any mistakes in my English, and thanks for the attention.
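
    A sketch, assuming MySQL (the phpMyAdmin mention suggests it) and an already-emptied table; my_table is a hypothetical name:

        ALTER TABLE my_table AUTO_INCREMENT = 1;
        -- or, if losing all rows is acceptable, TRUNCATE resets the counter too:
        TRUNCATE TABLE my_table;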

    Read the article

  • How to simulate a dial-up connection for testing purposes?

    - by mawg
    I have to code a server app where clients open a TCP/IP socket, send some data and close the connection. The data packets are small, under 100 bytes; however, there is talk of having the clients batch their transactions and send multiple packets. How can I best simulate a dial-up connection (using Delphi & Indy components, just FYI)? Is it as simple as:

    1.) open connection
    2.) wait a while (what is the definition of "a while"?)
    3.) close connection

    Read the article

  • Check if the path input is URL or Local File

    - by Ahmed
    Hi all, I'm working with XmlDataProvider and we have a configuration value "source". This value may be a local file or a URL, like:

        c:\data\test.xml          -- absolute
        data\test.xml             -- relative
        http://mysite/test.xml    -- URL

    How can I determine all these cases in code? I'm working in C#.
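
    A sketch using the standard Uri.TryCreate (the method name IsWebUrl is mine):

        static bool IsWebUrl(string source)
        {
            Uri uri;
            if (!Uri.TryCreate(source, UriKind.Absolute, out uri))
                return false;   // relative paths like data\test.xml land here
            // c:\data\test.xml parses as an absolute file:// URI, so check the scheme
            return uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps;
        }

    Uri.TryCreate also distinguishes absolute from relative local paths, so the three cases can be told apart in one pass.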

    Read the article
