Search Results

Search found 89926 results on 3598 pages for 'abstract data type'.


  • How to use a tree data structure in C#

    - by matti
    I found an implementation for a tree at this SO question. Unfortunately I don't know how to use it. Also I made a change to it, since LinkedList does not have an Add method: delegate void TreeVisitor<T>(T nodeData); class NTree<T> { T data; List<NTree<T>> children; public NTree(T data) { this.data = data; children = new List<NTree<T>>(); } public void AddChild(T data) { children.Add(new NTree<T>(data)); } public NTree<T> GetChild(int i) { return children[i]; } public void Traverse(NTree<T> node, TreeVisitor<T> visitor) { visitor(node.data); foreach (NTree<T> kid in node.children) Traverse(kid, visitor); } } I have a class named tTable and I want to store its children and their grandchildren (...) in this tree. My need is to find immediate children, not to traverse the entire tree. I also might need to find children matching some criteria. Let's say tTable has only a name, and I want to find children with names matching some criteria. tTable's constructor gives the name a value based on an int value (somehow). How do I use Traverse (i.e., write the delegate) if I have code like this: int i = 0; Dictionary<string, NTree<tTable>> tableTreeByRootTableName = new Dictionary<string, NTree<tTable>>(); tTable aTable = new tTable(i++); tableTreeByRootTableName[aTable.Name] = new NTree<tTable>(aTable); tableTreeByRootTableName[aTable.Name].AddChild(new tTable(i++)); tableTreeByRootTableName[aTable.Name].AddChild(new tTable(i++)); tableTreeByRootTableName[aTable.Name].GetChild(1).AddChild(new tTable(i++));
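
    A minimal usage sketch (my own, not the original poster's code): TreeVisitor<T> accepts any method or lambda taking a T, and a small helper added inside NTree<T> can answer the "immediate children matching a criterion" need without walking the whole tree. It assumes tTable exposes a Name property.

        using System;
        using System.Collections.Generic;

        // Inside NTree<T>: a helper for immediate children only (hypothetical addition).
        public IEnumerable<T> FindChildren(Predicate<T> match)
        {
            foreach (NTree<T> child in children)
                if (match(child.data))
                    yield return child.data;     // yields only direct children, no recursion
        }

        // Elsewhere, after building the tree as in the question:
        NTree<tTable> root = tableTreeByRootTableName[aTable.Name];
        root.Traverse(root, t => Console.WriteLine(t.Name));           // visit every node
        foreach (tTable t in root.FindChildren(x => x.Name == "2"))    // immediate children only
            Console.WriteLine("match: " + t.Name);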

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for: Affordable: Less than $100/yr or first-time cost. Reliable: At least a smaller chance of failing than there is of fire or flood. Easy for initial backup and to add to, and at least semi-easy to recover. I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above. Hard drive - I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to. DVDs - Way too much of a hassle for both backup and restore. Tape - No idea on the affordability of this option. Online - Given that I have at least 300GB already, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2Mb at best, this just doesn't sound like a good option for me, but I could be convinced. Other - Have I missed something? Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?

  • C# Reading and Writing a Char[] to and from a Byte[] - Updated with Solution

    - by Simon G
    Hi, I have a byte array of around 10,000 bytes which is basically a blob from Delphi that contains char, string, double and arrays of various types. This needs to be read in and updated via C#. I've created a very basic reader that gets the byte array from the db and converts the bytes to the relevant object type when accessing the property, which works fine. My problem is that when I try to write to a specific char[] item, it doesn't update the byte array. I've created the following extensions for reading and writing: public static class CharExtension { public static byte ToByte( this char c ) { return Convert.ToByte( c ); } public static byte ToByte( this char c, int position, byte[] blob ) { byte b = c.ToByte(); blob[position] = b; return b; } } public static class CharArrayExtension { public static byte[] ToByteArray( this char[] c ) { byte[] b = new byte[c.Length]; for ( int i = 0; i < c.Length; i++ ) { b[i] = c[i].ToByte(); } return b; } public static byte[] ToByteArray( this char[] c, int position, int length, byte[] blob ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, position, length ); return b; } } public static class ByteExtension { public static char ToChar( this byte[] b, int position ) { return Convert.ToChar( b[position] ); } } public static class ByteArrayExtension { public static char[] ToCharArray( this byte[] b, int position, int length ) { char[] c = new char[length]; for ( int i = 0; i < length; i++ ) { c[i] = b.ToChar( position ); position += 1; } return c; } } To read and write chars and char arrays my code looks like: Byte[] _Blob; // set from a db field public char ubin { get { return _Blob.ToChar( 14 ); } set { value.ToByte( 14, _Blob ); } } public char[] usercaplas { get { return _Blob.ToCharArray( 2035, 10 ); } set { value.ToByteArray( 2035, 10, _Blob ); } } So to write to the objects I can do: ubin = 'C'; // this will update the byte[] usercaplas = new char[10] { 'A', 'B', etc. }; // this will update the byte[] usercaplas[3] = 'C'; // this does not update the byte[] I know the reason is that the setter property is not being called, but I want to know: is there a way around this using code similar to what I already have? I know a possible solution is to use a private variable called _usercaplas that I set and update as needed; however, as the byte array is nearly 10,000 bytes in length, the class is already long and I would like a simpler approach to reduce the overall code length and complexity. Thanks. Solution: Here's my solution, should anyone want it. If you have a better way of doing it then please let me know.
    First I created a new class for the array: public class CharArrayList : ArrayList { char[] arr; private byte[] blob; private int length = 0; private int position = 0; public CharArrayList( byte[] blob, int position, int length ) { this.blob = blob; this.length = length; this.position = position; PopulateInternalArray(); SetArray(); } private void PopulateInternalArray() { arr = blob.ToCharArray( position, length ); } private void SetArray() { foreach ( char c in arr ) { this.Add( c ); } } private void UpdateInternalArray() { this.Clear(); SetArray(); } public char this[int i] { get { return arr[i]; } set { arr[i] = value; UpdateInternalArray(); } } } Then I created a couple of extension methods to help with converting to a byte[]: public static byte[] ToByteArray( this CharArrayList c ) { byte[] b = new byte[c.Count]; for ( int i = 0; i < c.Count; i++ ) { b[i] = Convert.ToChar( c[i] ).ToByte(); } return b; } public static byte[] ToByteArray( this CharArrayList c, byte[] blob, int position, int length ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, position, length ); return b; } So to read and write to the object: private CharArrayList _usercaplass; public CharArrayList usercaplas { get { if ( _usercaplass == null ) _usercaplass = new CharArrayList( _Blob, 2035, 100 ); return _usercaplass; } set { _usercaplass = value; _usercaplass.ToByteArray( _Blob, 2035, 100 ); } } As mentioned before, it's not an ideal solution, as I have to have private variables and extra code in the setter, but I couldn't see a way around it.
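
    For reference, a minimal alternative sketch (my own, not the poster's code): expose the blob region through an object whose indexer writes straight back into the byte array, so element assignments like usercaplas[3] = 'C' stay in sync without re-copying a list.

        using System;

        // A view over a region of the blob; writes go straight through to the byte[].
        public class BlobCharView
        {
            private readonly byte[] blob;
            private readonly int offset;
            private readonly int length;

            public BlobCharView(byte[] blob, int offset, int length)
            {
                this.blob = blob;
                this.offset = offset;
                this.length = length;
            }

            public char this[int i]
            {
                get
                {
                    if (i < 0 || i >= length) throw new IndexOutOfRangeException();
                    return Convert.ToChar(blob[offset + i]);
                }
                set
                {
                    if (i < 0 || i >= length) throw new IndexOutOfRangeException();
                    blob[offset + i] = Convert.ToByte(value);  // blob stays in sync on every write
                }
            }
        }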

  • Cannot execute "LOAD DATA LOCAL INFILE" MySQL query in Rails after a reconnection

    - by Ngan
    On Rails 2.3.8 (but I think Rails 3 might have this issue as well, not sure): I get an error when trying to execute a LOAD DATA LOCAL INFILE query after reconnecting to a database. I have a process that parses a file that can potentially take a bit of time. During the parsing, MySQL closes the connection due to timeout. This is fine; I do an ActiveRecord::Base.verify_active_connections! and I get the connection back (I do this in several places throughout my app). However, running a LOAD DATA LOCAL INFILE statement after reconnecting, I get this error: Mysql::Error: The used command is not allowed with this MySQL version It's not a permission issue, I know that for sure. Check out my test in console: ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users") [Sat Jan 08 00:09:29 2011] (9990) SQL (1.7ms) LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users => nil > ActiveRecord::Base.connection.disconnect! => #<Mysql:0x104c6f890> > ActiveRecord::Base.verify_active_connections! [Sat Jan 08 00:09:58 2011] (9990) SQL (0.2ms) SET SQL_AUTO_IS_NULL=0 => {...connection stuff...} > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users") [Sat Jan 08 00:10:00 2011] (9990) SQL (0.0ms) Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users ActiveRecord::StatementInvalid: Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log' from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute' from (irb):6 I am able to do other queries like SELECT and whatnot, and I get the correct result. It's just this one that's giving me the error. I even tested this with a fresh Rails app. You'll notice that I am able to do the exact same query before the disconnect. Thanks for the help!
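
    One plausible culprit (an assumption, not verified against the poster's setup): the MySQL client must be (re)connected with the LOCAL INFILE capability enabled, and the adapter's reconnect path may not re-enable it. A heavily hedged Ruby sketch of forcing it on every connect in Rails 2.3 - verify that Mysql::OPT_LOCAL_INFILE and the adapter's private connect method exist in your gem/Rails versions before relying on this:

        # config/initializers/mysql_local_infile.rb - hedged sketch
        module ActiveRecord
          module ConnectionAdapters
            class MysqlAdapter
              private
              def connect_with_local_infile
                if defined?(Mysql::OPT_LOCAL_INFILE)
                  @connection.options(Mysql::OPT_LOCAL_INFILE, true)  # must be set before real_connect
                end
                connect_without_local_infile
              end
              alias_method_chain :connect, :local_infile
            end
          end
        end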

  • Filesystem fragmentation on the level of set of files

    - by trismarck
    A file is stored in blocks by the file system. The block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (that are physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program. This program has a great many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (but not the blocks making up each file) will be scattered on the disk, and thus the program launch time will be longer. Actually, this time could be made longer by defragmenting the disk, as the defragmentation process not only glues fragmented files together but also moves some files to optimize free space chunks. The questions: is the type of fragmentation I mentioned relevant to the file system? Is it possible to remedy this kind of fragmentation, and if so, how would you do it? Also, I'm not sure if this question belongs on Super User or on Server Fault (as I guess filesystem fragmentation is more important in the server environment).

  • PHP, MySQL, jQuery, AJAX: JSON data returns correct response but frontend returns error

    - by Devner
    Hi all, I have a user registration form. I am doing server-side validation on the fly via AJAX. The quick summary of my problem is that upon validating 2 fields, I get an error for the second field's validation. If I comment out the first field, then the 2nd field does not show any error. It has this weird behavior. More details below: The HTML, JS and PHP code are below: HTML FORM: <form id="SignupForm" action=""> <fieldset> <legend>Free Signup</legend> <label for="username">Username</label> <input name="username" type="text" id="username" /><span id="status_username"></span><br /> <label for="email">Email</label> <input name="email" type="text" id="email" /><span id="status_email"></span><br /> <label for="confirm_email">Confirm Email</label> <input name="confirm_email" type="text" id="confirm_email" /><span id="status_confirm_email"></span><br /> </fieldset> <p> <input id="sbt" type="button" value="Submit form" /> </p> </form> JS: <script type="text/javascript"> $(document).ready(function() { $("#email").blur(function() { var email = $("#email").val(); var msgbox2 = $("#status_email"); if(email.length > 3) { $.ajax({ type: 'POST', url: 'check_ajax2.php', data: "email="+ email, dataType: 'json', cache: false, success: function(data) { if(data.success == 'y') { alert('Available'); } else { alert('Not Available'); } } }); } return false; }); $("#confirm_email").blur(function() { var confirm_email = $("#confirm_email").val(); var email = $("#email").val(); var msgbox3 = $("#status_confirm_email"); if(confirm_email.length > 3) { $.ajax({ type: 'POST', url: 'check_ajax2.php', data: 'confirm_email='+ confirm_email + '&email=' + email, dataType: 'json', cache: false, success: function(data) { if(data.success == 'y') { alert('Available'); } else { alert('Not Available'); } } , error: function (data) { alert('Some error'); } }); } return false; }); }); </script> PHP code: <?php //check_ajax2.php if(isset($_POST['email'])) { $email = $_POST['email']; $res = mysql_query("SELECT uid FROM members WHERE email = '$email' "); $i_exists = mysql_num_rows($res); if( 0 == $i_exists ) { $success = 'y'; $msg_email = 'Email available'; } else { $success = 'n'; $msg_email = 'Email is already in use.'; } print json_encode(array('success' => $success, 'msg_email' => $msg_email)); } if(isset($_POST['confirm_email'])) { $confirm_email = $_POST['confirm_email']; $email = ( isset($_POST['email']) && trim($_POST['email']) != '' ? $_POST['email'] : '' ); $res = mysql_query("SELECT uid FROM members WHERE email = '$confirm_email' "); $i_exists = mysql_num_rows($res); if( 0 == $i_exists ) { if( isset($email) && isset($confirm_email) && $email == $confirm_email ) { $success = 'y'; $msg_confirm_email = 'Email available and match'; } else { $success = 'n'; $msg_confirm_email = 'Email and Confirm Email do NOT match.'; } } else { $success = 'n'; $msg_confirm_email = 'Email already exists.'; } print json_encode(array('success' => $success, 'msg_confirm_email' => $msg_confirm_email)); } ?> THE PROBLEM: As long as I am validating both $_POST['email'] and $_POST['confirm_email'] in the check_ajax2.php file, the validation for the confirm_email field always returns an error.
    With my limited knowledge of Firebug, however, I did find out that the following were the responses when I entered email and confirm_email in the fields: RESPONSE 1: {"success":"y","msg_email":"Email available"} RESPONSE 2: {"success":"y","msg_email":"Email available"}{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."} Although RESPONSE 2 shows that we are receiving the correct message via msg_confirm_email, on the front end the 'Some error' alert is popping up (I have enabled the alert for debugging). I have spent 48 hours trying to change every part of the code wherever possible, but with only a little success. What is weird about this is that if I comment out the validation for the $_POST['email'] field completely, then the validation for the $_POST['confirm_email'] field displays correctly without any errors. If I enable it back, it validates the email field correctly, but when it reaches the point of validating the confirm_email field, it again shows me the error. I have also tried renaming the success variable in check_ajax2.php to different names for both $_POST['email'] and $_POST['confirm_email'], but with no success. I will be adding more fields to the form and validating them within check_ajax2.php, so I am not planning on using different AJAX pages for validating each of those fields (and I don't think it would be smart to do it that way). I am not a jQuery or AJAX guru, so all help in resolving this issue is highly appreciated. Thank you in advance.
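
    The Firebug capture already shows the cause: RESPONSE 2 is two JSON objects concatenated, which is not valid JSON, so jQuery's dataType: 'json' parse fails and the error callback fires. A sketch of one way to fix it - accumulate everything and encode exactly once (field names follow the question; escaping shown is the minimum, not a security review):

        <?php
        // check_ajax2.php - collect all results, emit one JSON object per request.
        $response = array('success' => 'y');

        if (isset($_POST['email'])) {
            $res = mysql_query("SELECT uid FROM members WHERE email = '"
                               . mysql_real_escape_string($_POST['email']) . "'");
            if (mysql_num_rows($res) > 0) {
                $response['success']   = 'n';
                $response['msg_email'] = 'Email is already in use.';
            } else {
                $response['msg_email'] = 'Email available';
            }
        }

        if (isset($_POST['confirm_email']) &&
            (!isset($_POST['email']) || $_POST['email'] !== $_POST['confirm_email'])) {
            $response['success']           = 'n';
            $response['msg_confirm_email'] = 'Email and Confirm Email do NOT match.';
        }

        print json_encode($response);  // exactly one JSON object, whatever was validated
        ?>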

  • Improving performance when pasting 2000 rows with validations

    - by Lohit
    I have N rows (no fewer than 1000) on an Excel spreadsheet, and in this sheet our project has 150 columns. Now, our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages, and format the data as per the requirements. So we need to parse and evaluate the data in each row at runtime. Now all the formatting of data and validations come from the back-end database, and we have them in a data table (dtValidateAndFormatConditions). There are around 50 conditions. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but now I have reduced it to 20-30 seconds. However, I increased the speed by writing an expression parser of my own, not by changing the algorithm. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like: public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow) { foreach (DataRow dRow in dtValidateAndFormatConditions.Rows) { string Condition = (string)dRow["Condition"]; string FormatValue = (string)dRow["Value"]; GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow); Condition = Parse(Condition); dRow["Condition"] = Condition; FormatValue = Parse(FormatValue); dRow["Value"] = FormatValue; } } The above code gets called row-wise like this: public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr) { int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount; for (int iRow = iRowStart; iRow < iRowEnd; iRow++) { ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow); } } Please note my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged linq because, although not shown, I have been using LINQ in some parts of my code.)
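
    One low-risk algorithmic win, if Parse() is deterministic for a given input string (an assumption - the snippet doesn't show its internals): with ~50 conditions repeated across N rows, parse each distinct text once and reuse the result, so later rows pay only a dictionary lookup. A C# sketch:

        using System.Collections.Generic;

        // Cache of parsed results, keyed by the raw condition/format text.
        private readonly Dictionary<string, string> _parseCache =
            new Dictionary<string, string>();

        private string ParseCached(string text)
        {
            string parsed;
            if (!_parseCache.TryGetValue(text, out parsed))
            {
                parsed = Parse(text);       // expensive path runs once per distinct text
                _parseCache[text] = parsed;
            }
            return parsed;                  // every subsequent row pays only a lookup
        }

    The same idea applies one level up: if GetValidatedFormattedData rewrites the condition per row, keying the cache on the rewritten text still collapses the repeats.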

  • Help Writing Input Data to Database With WordPress Plugin

    - by HollerTrain
    Hi, I am making a WordPress plugin where I need the Admin to enter data into a database table. I am able to install the db table when the plugin is activated, however I can't figure out how to save the user input. I've asked on the WP forums but they're dead... Any experienced guru who can lend some guidance would be greatly appreciated. <?php /******************************************************************* * INSTALL DB TABLE - ONLY AT RUN TIME * *******************************************************************/ function ed_xml_install() { global $wpdb; $ed_xml_data = $wpdb->prefix . "ed_xml_data"; if($wpdb->get_var("SHOW TABLES LIKE '$ed_xml_data'") != $ed_xml_data) { $sql = "CREATE TABLE " . $ed_xml_data . " ( id mediumint(9) NOT NULL AUTO_INCREMENT, name tinytext NOT NULL, address text NOT NULL, url VARCHAR(55) NOT NULL, phone bigint(11) DEFAULT '0' NOT NULL, UNIQUE KEY id (id) );"; require_once(ABSPATH . 'wp-admin/includes/upgrade.php'); dbDelta($sql); $name = "Example Business Name"; $address = "1234 Example Street"; $url = "http://www.google.com"; $phone = "523-3232-323232"; $insert = "INSERT INTO " . $ed_xml_data . " (phone, name, address, url) " . "VALUES ('" . $wpdb->escape($phone) . "','" . $wpdb->escape($name) . "','" . $wpdb->escape($address) . "', '" . $wpdb->escape($url) . "')"; $results = $wpdb->query( $insert ); } } //call the install hook register_activation_hook(__FILE__,'ed_xml_install'); /******************************************************************* * CREATE MENU, CREATE MENU CONTENT * *******************************************************************/ if ( is_admin() ){ /* place it under the ED menu */ //TODO $allowed_group = ''; /* Call the html code */ add_action('admin_menu', 'ed_xmlcreator_admin_menu'); function ed_xmlcreator_admin_menu() { add_options_page('ED XML Creator', 'ED XML Creator', 'administrator', 'ed_xml_creator', 'ed_xmlcreator_html_page'); } } /******************************************************************* * CONTENT OF MENU CONTENT * *******************************************************************/ function ed_xmlcreator_html_page() { ?> <div> <h2>Editors Deal XML Options</h2> <p>Fill in the below information which will get passed to the .XML file.</p> <p>[<a href="" title="view XML file">view XML file</a>]</p> <form method="post" action="options.php"> <?php wp_nonce_field('update-options'); ?> <table width="510"> <!-- title --> <tr valign="top"> <th width="92" scope="row">Deal URL</th> <td width="406"> <input name="url" type="text" id="url" value="<?php echo get_option('url'); ?>" /> </td> </tr> <!-- description --> <tr valign="top"> <th width="92" scope="row">Deal Address</th> <td width="406"> <input name="address" type="text" id="address" value="<?php echo get_option('address'); ?>" /> </td> </tr> <!-- business name --> <tr valign="top"> <th width="92" scope="row">Business Phone</th> <td width="406"> <input name="phone" type="text" id="phone" value="<?php echo get_option('phone'); ?>" /> </td> </tr> <!-- address --> <tr valign="top"> <th width="92" scope="row">Business Name</th> <td width="406"> <input name="name" type="text" id="name" value="<?php echo get_option('name'); ?>" /> </td> </tr> </table> <input type="hidden" name="action" value="update" /> <input type="hidden" name="page_options" value="hello_world_data" /> <p> <input type="submit" value="<?php _e('Save Changes') ?>" /> </p> </form> </div> <?php } ?>
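
    The form already posts to options.php with the classic "update-options" mechanism, but its page_options field names hello_world_data, which none of the inputs use. A hedged sketch of the likely fix (based on how older WordPress versions route options.php saves; prefixing the option names, e.g. ed_xml_url, would also avoid collisions with other plugins):

        <!-- page_options must list the exact option names the form submits: -->
        <input type="hidden" name="action" value="update" />
        <input type="hidden" name="page_options" value="url,address,phone,name" />

    With that in place, get_option('url'), get_option('address'), etc. should return the saved values when the page re-renders.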

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing.) Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship with the other. 2. Moving columns from one table to another. 3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
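
    For the server-side copy itself, a sketch of the IDENTITY_INSERT route (table and column names are placeholders; on SQL Server 2008, INSERT ... SELECT WITH (TABLOCK) can be minimally logged under the simple or bulk-logged recovery model, and indexed targets may additionally need trace flag 610):

        -- Copy table-to-table entirely on the server, keeping identity values.
        SET IDENTITY_INSERT dbo.TargetTable ON;

        INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, Col1, Col2)
        SELECT Id, Col1, Col2
        FROM dbo.SourceTable;

        SET IDENTITY_INSERT dbo.TargetTable OFF;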

  • JavaScript terminates after trying to select data from an object passed to a function

    - by Silmaril89
    Here is my JavaScript: $(document).ready(function(){ var queries = getUrlVars(); $.get("mail3.php", { listid: queries["listid"], mindex: queries["mindex"] }, showData, 'html'); }); function showData(data) { var response = $(data).find("#mailing").html(); if (response == null) { $("#results").html("<h3>Server didn't respond, try again.</h3>"); } else if (response.length) { var old = $("#results").html(); old = old + "<br /><h3>" + response + "</h3>"; $("#results").html(old); var words = response.split(' '); words[2] = words[2] * 1; words[4] = words[4] * 1; if (words[2] < words[4]) { var queries = getUrlVars(); $.get("mail3.php", { listid: queries["listid"], mindex: words[2] }, function(data){showData(data);}, 'html'); } else { var done = $(data).find("#done").html(); old = old + "<br />" + done; $("#results").html(old); } } else { $("#results").html("<h3>Server responded with an empty reply, try again.</h3>"); } } function getUrlVars() { var vars = [], hash; var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&'); for (var i = 0; i < hashes.length; i++) { hash = hashes[i].split('='); vars.push(hash[0]); vars[hash[0]] = hash[1]; } return vars; } After the first line in showData: var response = $(data).find("#mailing").html(); the JavaScript stops. If I put an alert before it, the alert pops up; after it, it doesn't pop up. There must be something wrong with using $(data), but why? Any ideas would be appreciated.
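
    One common cause worth checking (an assumption - the question doesn't show the markup mail3.php returns): if #mailing is a top-level node of the returned fragment, $(data).find("#mailing") matches nothing, because .find() only searches descendants; and malformed HTML can make $(data) itself throw, which would stop the script exactly where described. A defensive sketch:

        // Wrapping the response first guarantees #mailing is a descendant,
        // whether mail3.php returns it at the top level or nested:
        var response = $("<div>").html(data).find("#mailing").html();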

  • MATLAB query about for loop, reading in data and plotting

    - by mp7
    Hi there, I am a complete novice at using MATLAB and am trying to work out if there is a way of optimising my code. Essentially I have data from model outputs and I need to plot them using MATLAB. In addition I have reference data (with 95% confidence intervals) which I plot on the same graph to get a visual idea of how close the model outputs and reference data are. In terms of the model outputs I have several thousand files (numbered sequentially) which I open in a loop and plot. The problem/question I have is whether I can preprocess the data and then plot later - to save time. The issue I seem to be having when I try this is that I have a legend which either does not appear or is inaccurate. My code (apologies if it is not elegant): fn= xlsread(['tbobserved' '.xls']); time= fn(:,1); totalreference=fn(:,4); totalreferencelowerci=fn(:,6); totalreferenceupperci=fn(:,7); figure plot(time,totalreference,'-', time, totalreferencelowerci,'--', time, totalreferenceupperci,'--'); xlabel('Year'); ylabel('Reference incidence per 100,000 population'); title ('Total'); clickableLegend('Observed reference data', 'Totalreferencelowerci', 'Totalreferenceupperci','Location','BestOutside'); xlim([1910 1970]); hold on start_sim=10000; end_sim=10005; h = zeros(1, end_sim - start_sim + 1); for i=start_sim:end_sim %is there any way of doing this earlier to save time? a=int2str(i); incidenceFile =strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls'); est_tot=importdata(incidenceFile, '\t', 1); cal_tot=est_tot.data; magnitude=1; t1=cal_tot(:,1)+1750; totalmodel=cal_tot(:,3)+cal_tot(:,5); idx = i - start_sim + 1; h(idx)=plot(t1,totalmodel); xlim([1910 1970]); ylim([0 500]); hold all clickableLegend(h(idx),a,'Location','BestOutside') end Essentially I was hoping to find a way of reading in the data first and then plotting later - i.e. optimise the code. I hope you might be able to help. Thanks. mp
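
    A sketch of that separation (file-name pattern and column indices copied from the question; the stock legend is used here in place of clickableLegend, and is built once at the end so it stays accurate):

        % Pass 1: slow I/O only - read every simulation into memory.
        nSims = end_sim - start_sim + 1;
        t = cell(nSims,1); y = cell(nSims,1); labels = cell(nSims,1);
        for k = 1:nSims
            a = int2str(start_sim + k - 1);
            est = importdata(['result_Sim_' a 'I_byCal_total.xls'], '\t', 1);
            t{k} = est.data(:,1) + 1750;
            y{k} = est.data(:,3) + est.data(:,5);
            labels{k} = a;
        end

        % Pass 2: plotting only - handles collected, legend built once.
        hold on
        h = zeros(nSims,1);
        for k = 1:nSims
            h(k) = plot(t{k}, y{k});
        end
        xlim([1910 1970]); ylim([0 500]);
        legend(h, labels, 'Location', 'BestOutside');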

  • Ping remote server and wait to get data

    - by infinity
    Hi, I'm building my first application for Android and I've reached a point where I can't find a solution - I don't even know what to search for in Google. So the problem: I am pinging a remote server with a GET request through the application, passing some parameters like file_id. Then the server gives back a confirmation if the file exists, or an error otherwise, both in plain text. The error string is $$$ERROR$$$. The confirmation is actually a JSON string that holds the path to the file. If the file doesn't exist on the server, the server generates the error message and starts downloading and processing the file, which normally takes 10-30 seconds. What would be the best way to check if the file is ready for download? I have a DownloadFile class that extends AsyncTask, but before I reach the point of downloading the file I need the URL, which depends on the previous request, which lives in the main class on the UI thread. Here is some code: public class MainActivity extends Activity { private String getInfo() { // Create a new HttpClient and GET request HttpClient httpClient = new DefaultHttpClient(); HttpGet httpGet = new HttpGet(infoUrl); StringBuilder sb = null; String data; JSONObject jObject = null; try { HttpResponse response = httpClient.execute(httpGet); // This might be equal to "$$$ERROR$$$" if no file exists sb = inputStreamToString(response.getEntity().getContent()); } catch(ClientProtocolException e) { // TODO Auto-generated catch block Log.v("Error: pushItem ClientProtocolException: ", e.toString()); } catch (IOException e) { // TODO Auto-generated catch block Log.v("Error: pushItem IOException: ", e.toString()); } // Clean the data to be compliant JSON format data = sb.toString().replace("info = ", ""); try { jObject = new JSONObject(data); data = jObject.getString("h"); fileTitle = jObject.getString("title"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } downloadUrl = String.format(downloadUrl, fileId, data); return downloadUrl; } } So my idea was to get the content and, if it's equal to $$$ERROR$$$, loop until the JSON data is returned, but I guess there is a better solution. Note: I don't have control over the server output, so I have to deal with what I have.
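
    A minimal Java sketch of that polling loop, meant to run inside AsyncTask#doInBackground so the UI thread never blocks (fetchInfo() is a hypothetical wrapper around the HttpGet shown above; attempt count and delay are placeholders):

        // Poll until the server stops answering $$$ERROR$$$, then return the JSON.
        private static final String ERROR_MARKER = "$$$ERROR$$$";

        private String waitForInfo(int maxAttempts, long delayMs) throws InterruptedException {
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                String body = fetchInfo();               // the GET request from the question
                if (body != null && !body.contains(ERROR_MARKER)) {
                    return body;                         // JSON with the file path is ready
                }
                Thread.sleep(delayMs);                   // back off before polling again
            }
            return null;                                 // caller reports a timeout
        }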

  • Post data to MVC3 controller without page refresh

    - by Smooth
    I have this script that basically has 4 select boxes. What I want is for the 2 top select boxes to submit the selected option value to an action (which can be found at "ProductKoppeling/ProductKoppelingPartial") when I click on an option, but without a page refresh. I tried JSON and I tried AJAX, but I didn't get it working. How should I do this? <script language="javascript" type="text/javascript"> function delete_1() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_1').value = '5'; document.getElementById('hpg_submit').submit(); } } function delete_2() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_2').value = '6'; document.getElementById('pg_submit').submit(); } } function delete_3() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_3').value = '6'; document.getElementById('p_submit').submit(); } } </script> <div style="width: 500px; float: left;"> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "onload_submit" })) { @Html.DropDownList("Klant.Id", (ViewBag.Klant as SelectList), new { onchange = "document.getElementById('onload_submit').submit()" }) } <div style="clear: both"></div> <div style="float: left;"> <b>Hoofdgroepen</b><br /> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "hpg_submit" })) { if (ViewBag.SelectedKlant != null) { <input type="hidden" name="Klant.Id" value="@ViewBag.SelectedKlant.Id" /> } <select style="width: 200px;" size="6" id="HoofdProductGroep" name="HoofdProductGroep.Id" onchange="document.getElementById('hpg_submit').submit();"> @foreach (var hpg in ViewBag.HoofdProductGroep) { if (ViewBag.SelectedHPG != null) { if (hpg.Id == ViewBag.SelectedHPG.Id) { <option value="@hpg.Id" selected="selected">@hpg.Naam</option> } else { <option value="@hpg.Id">@hpg.Naam</option> } } else { <option value="@hpg.Id">@hpg.Naam</option> } } </select> <input type="hidden" name="Actie" id="Actie_1" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_1').value='1';document.getElementById('hpg_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_1').value='2';document.getElementById('hpg_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_1()" /> } </div> <div style="float: right;"> <b>Groepen</b><br /> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "pg_submit" })) { if (ViewBag.SelectedHPG != null) { <input type="hidden" name="HoofdProductGroep.Id" value="@ViewBag.SelectedHPG.Id" /> } if (ViewBag.SelectedKlant != null) { <input type="hidden" name="Klant.Id" value="@ViewBag.SelectedKlant.Id" /> } <select size="6" style="width: 200px;" id="ProductGroep_Id" name="ProductGroep.Id" onchange="document.getElementById('pg_submit').submit();"> @foreach (var pg in ViewBag.ProductGroep) { if (ViewBag.SelectedPG != null) { if (pg.Id == ViewBag.SelectedPG.Id) { <option value="@pg.Id" selected="selected">@pg.Naam</option> } else { <option value="@pg.Id">@pg.Naam</option> } } else 
{ <option value="@pg.Id">@pg.Naam</option> } } </select> <input type="hidden" name="Actie" id="Actie_2" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_2').value='3';document.getElementById('pg_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_2').value='4';document.getElementById('pg_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_2()" /> } </div> <div style="clear: both; height: 25px;"></div> @using (Html.BeginForm("Save", "ProductKoppeling", FormMethod.Post, new { id = "p_submit" })) { <div style="float: left"> <b>Producten</b><br /> <select size="18" style="width: 200px;" name="Product.Id"> @foreach (var p in ViewBag.Product) { <option value="@p.Id">@p.Naam</option> } </select> @if (ViewBag.SelectedPG != null) { if (ViewBag.SelectedPG.Id != null) { <input type="hidden" name="ProductGroep.Id" value="@ViewBag.SelectedPG.Id" /> } } <input type="hidden" name="Actie" id="Actie_3" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_3').value='1';document.getElementById('p_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_3').value='2';document.getElementById('p_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_3()" /> <br /> </div> <div style="float: left; width: 100px;"> <center> <br /><br /><br /><br /> <a style="cursor: pointer; float: none; color: blue; font-size: 30px;" onclick="document.getElementById('p_submit').submit();">»</a> <br /><br /><br /><br /><br /><br /><br /><br /><br /> <a style="cursor: pointer; float: none; color: blue; font-size: 30px;" onclick="document.getElementById('pgp_submit').submit();">«</a> </center> </div> } <div style="float: right;"> <b>Producten in groepen</b><br /> @using (Html.BeginForm("Delete", "ProductKoppeling", FormMethod.Post, new { id = "pgp_submit" })) { <select size="18" style="width: 200px;" name="ProductGroepProduct.Id"> @foreach (var pgp in ViewBag.ProductGroepProduct) { if (pgp != null) { if (pgp.Product != null) { <option value="@pgp.Id">@pgp.Product.Naam</option> } } } </select> } </div>
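
    In place of the onchange="...submit()" handlers on the two top select boxes, an AJAX post of the same fields can hit the action without reloading - a sketch (assumes jQuery is loaded; '#groepen' is a hypothetical container the returned partial-view HTML would be loaded into):

        // Post the selection to the action without a page refresh.
        $('#HoofdProductGroep').change(function () {
            $.post('/ProductKoppeling/ProductKoppelingPartial', {
                'HoofdProductGroep.Id': $(this).val(),
                'Klant.Id': $('input[name="Klant.Id"]').val()
            }, function (html) {
                $('#groepen').html(html);   // refresh only the affected region
            });
        });

    The same pattern, with 'ProductGroep.Id' in the posted data, covers the second select box.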

  • Persistence scheme & state data for low memory situations (iPhone)

    - by Robin Jamieson
    What happens to state information held by a class's variables after coming back from a low-memory situation? I know that views will get unloaded and then reloaded later, but what about some ancillary classes and the data held in them that's used by the controller that launched the view? Sample scenario in question: @interface MyCustomController: UIViewController { ServiceAuthenticator *authenticator; } -(id)initWithAuthenticator:(ServiceAuthenticator *)auth; // the user may press a button that will cause the authenticator // to post some data to the service. -(IBAction)doStuffButtonPressed:(id)sender; @end @interface ServiceAuthenticator { BOOL hasValidCredentials; // YES if user's credentials have been validated NSString *username; NSString *password; // password is not stored in plain text } -(id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword; -(void)postData:(NSString *)data; @end The app delegate creates the ServiceAuthenticator class with some user data (read from a plist file) and the class logs the user in with the remote service. Inside MyAppDelegate's applicationDidFinishLaunching: - (void)applicationDidFinishLaunching:(UIApplication *)application { ServiceAuthenticator *auth = [[ServiceAuthenticator alloc] initWithUserCredentials:username password:userPassword]; MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...]; controller.authenticator = auth; // Configure and show the window [window addSubview:..]; // make everything visible [window makeKeyAndVisible]; } Then whenever the user presses a certain button, 'MyCustomController's doStuffButtonPressed' is invoked. -(IBAction)doStuffButtonPressed:(id)sender { [authenticator postData:someDataFromSender]; } The authenticator in turn checks to see if the user is logged in (a BOOL variable indicates login state) and, if so, exchanges data with the remote service. The ServiceAuthenticator is the kind of class that validates the user's credentials only once, and all subsequent calls to the object will be to postData. Once a low-memory scenario occurs and the associated nib & MyCustomController get unloaded -- when it's reloaded, what's the process for re-establishing the 'ServiceAuthenticator' object and its former state? I'm periodically persisting all of the data in my actual model classes. Should I also consider persisting the state data in these utility-style classes? Is that the pattern to follow?

  • SQL Server 2000 tables

    - by user40766
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field. The table has a clustered index on memberid. The table is now about 200 million rows. Indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647 (considering just positive values). My questions: Does anyone have experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks
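
    For maintenance on the existing single-table design, SQL Server 2000's online defragmentation is worth noting before committing to a split - a sketch (database and table names are placeholders):

        -- Measure fragmentation, then defragment the clustered index online:
        DBCC SHOWCONTIG ('MemberData') WITH FAST;
        DBCC INDEXDEFRAG (MyDatabase, 'MemberData', 1);  -- index id 1 = clustered index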

  • Automating SSRS 2008 R2 report snapshots while running reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot in the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong, and do I have to create a subscription instead? Note: this is for an auditing database, and it has far too much data to query a range of more than one day - reports are restricted as such (one day has over 1 million rows on its own).

  • Stack, data and address space limits on an Ubuntu server

    - by PaulDaviesC
    I am running an Ubuntu server which has around 5000 users. The users are allowed to SSH in to the system. In order to cap the memory used up by a process, I have capped the address space limits using limits.conf. So my question is: should I also be limiting the data and stack segments? I feel that is not required, since I am capping address space. Are there any pitfalls if I do not cap the stack and data limits?
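
    For reference, a sketch of what the three caps look like in /etc/security/limits.conf (group name and values are placeholders; values are in KB):

        # domain  type  item   value (KB)
        @users    hard  as     524288    # total address space - the cap already in place
        @users    hard  data   262144    # data segment - arguably redundant once 'as' is set
        @users    hard  stack  16384     # stack - likewise bounded by 'as'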

  • Need some comparative data on Apache and IIS

    - by zod
    This is not a pure programming question, but it is very programming-related and I am not sure where else to ask. If you don't know the answer, please don't downvote; if you know the right place to ask, please suggest it or move this question. We have a web application running on PHP 5, Zend Framework, Apache. I need some comparative data which shows that the above technologies and server are among the best and that thousands of domains use them. Do you know any website where I can get this type of data: a listing of major websites developed in PHP, major domains that run on Apache, major websites running on Zend, or a comparative case study on Apache and IIS or PHP and .NET?

  • How to warehouse data that is not needed from MS SQL Server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed day-to-day, but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is: since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
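
    One common pattern for this, as a sketch rather than a prescription (table, column and path names are placeholders): copy the cold rows into a dedicated archive database, back that up to offline/offsite media, verify it restores, and only then trim the live table.

        -- 1. Move the cold rows into a dedicated archive database.
        SELECT *
        INTO   ArchiveDB.dbo.BigTable_Archive
        FROM   LiveDB.dbo.BigTable
        WHERE  CreatedDate < '20120101';        -- hypothetical cutoff column

        -- 2. Back the archive up (and test-restore it somewhere else).
        BACKUP DATABASE ArchiveDB TO DISK = N'E:\Backups\ArchiveDB.bak';

        -- 3. Only after the restore is verified, trim the live table.
        DELETE FROM LiveDB.dbo.BigTable
        WHERE  CreatedDate < '20120101';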

  • Using LDAP to store customer data

    - by mechcow
    We wish to store some data in 389 Directory Server LDAP that doesn't fit that well into the standard set of schemas that come with the product. Nothing too amazing, things like: when the customer joined; whether they are currently an active customer; a customer certificate[1]; which environment they are using. My question is this: should we register an OID and start writing our own custom schema, OR is there a standard schema definition not provided by Directory Server that we can download and use that would fit our needs? Should we munge/hack existing attributes and store the data among those (I'm strongly opposed to this, but would be interested in arguments about why it's better than extending)? [1] I know there is a field for this, userCertificate, but we don't want to use it to authenticate the user for the purposes of binding. Using CentOS 5.5 with 389 Directory Server 8.1.
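
    If the custom-schema route is taken, a sketch of what a definition could look like as an ldapmodify LDIF against 389 DS (the 99999 OID arc is a placeholder for a registered private enterprise number; syntax 1.3.6.1.4.1.1466.115.121.1.24 is GeneralizedTime):

        dn: cn=schema
        changetype: modify
        add: attributeTypes
        attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'customerJoinDate'
          DESC 'Date the customer joined' SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
          SINGLE-VALUE X-ORIGIN 'user defined' )
        -
        add: objectClasses
        objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'ourCustomer' SUP top
          AUXILIARY MAY ( customerJoinDate ) X-ORIGIN 'user defined' )

    Being AUXILIARY, the class can then be added to existing customer entries without restructuring them.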

  • Associating multiple data with a single entry in Open Office Base

    - by idyllhands
    I'm trying to build a database that I can use to track prices of groceries on certain dates. My problem is that I cannot figure out how to associate a single entry with multiple data points. For example, carrots. The index would be carrots. Then a few categorizing fields (e.g., Produce|Vegetable). Then I can enter a price, the date that the price was valid, the store that was selling it at said price, etc. And the next time I buy carrots, I can just add a new set of pricing data that would be associated with the original carrots entry. I know very little about database building, so if anyone has something I could just modify, I would greatly appreciate it. Alternatively, a step-by-step tutorial would be great.
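
    The shape being described is a classic one-to-many pair of tables: one row per item, many price observations pointing at it. A sketch in the SQL dialect of Base's embedded HSQLDB engine (table and column names are placeholders):

        CREATE TABLE "Items" (
          "ItemID"      INTEGER IDENTITY,
          "Name"        VARCHAR(50),
          "Category"    VARCHAR(30),
          "SubCategory" VARCHAR(30)
        );

        CREATE TABLE "Prices" (
          "PriceID"   INTEGER IDENTITY,
          "ItemID"    INTEGER,
          "Price"     DECIMAL(8,2),
          "PriceDate" DATE,
          "Store"     VARCHAR(50),
          FOREIGN KEY ("ItemID") REFERENCES "Items" ("ItemID")
        );

    Each carrots purchase then becomes one new row in "Prices" pointing at the single carrots row in "Items".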

  • Can't connect to wireless router anymore due to data rate problem

    - by Jay White
    I was playing around with my wireless router and switched the mode to a fixed mode B. Now I can no longer associate with the AP. Windows does not give any particular error message, but with Wireshark I see that the returned error is that the client does not support the necessary data rate. My wireless card is type N, and it is set to a/b/g-compatible mode. I tried setting it to just B, however this made no difference. How can I set the data rate of my card so that I can connect to my AP again? I would prefer not to just reset the device, as there has been some configuration done that would be a pain to redo, and I also do not have the ISP password handy. Regardless, I would like to understand this situation better.

  • Where do deleted items go on the hard drive?

    - by Jerry
    After reading the quote below on the Casey Anthony trial (CNN), I am curious about where deleted files actually go on a hard drive, how they can be seen after being deleted, and to what extent the data can be recovered (fully, partially, etc.). "Earlier in the trial, experts testified that someone conducted the keyword searches on a desktop computer in the home Casey Anthony shared with her parents. The searches were found in a portion of the computer's hard drive that indicated they had been deleted, Detective Sandra Osborne of the Orange County Sheriff's Office testified Wednesday in Anthony's capital murder trial." I know some of the questions here on Super User address third-party software that can be used for this kind of thing, but I'm more interested in how this data can be seen after deletion, where it resides on the hard drive, etc. I find the whole topic intriguing, so any additional insight is welcome.

  • AWS EC2 can't execute user-data script

    - by Bloodnut
    I'm pretty new to AWS and EC2, but I want to launch instances from another instance and have them run a user-data script after boot. I have installed the EC2 tools and ran the command as explained in various examples, like here http://www.turnkeylinux.org/blog/ec2-userdata and Eric Hammond's tutorials. However, when I actually use the command "ec2-run-instances --key my-key --user-data-file myscript my-ami", it only runs the new instance but doesn't execute the script. myscript contains: #!/bin/bash echo "hello" > ~/output.txt I'm running Ubuntu Server 12.04 AMIs; the target AMIs are duplicates of the initiating instance. If I run curl http://169.254.169.254/latest/user-data the imported script is there.
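
    A few checks on the launched instance can narrow down where the script is being lost - a sketch (paths are those used by cloud-init on Ubuntu 12.04; this assumes the duplicated AMI still has cloud-init installed):

        # Is the user-data actually attached to the instance?
        curl -s http://169.254.169.254/latest/user-data

        # Did cloud-init see and extract it?
        grep -i 'user' /var/log/cloud-init.log
        ls /var/lib/cloud/instance/scripts/

    Note also that the script runs as root, so "hello" would land in /root/output.txt rather than an ordinary user's home directory.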
