Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 229/356

  • Multiple storages using localStorage

    - by Blondie
    Hey, is it possible for the same key to hold many different values, stored separately so they can be shown in a list? e.g.

        function save() {
            var inputfield = document.getElementById('field').value;
            localStorage['justified'] = inputfield;
        }
        <input type="text" id="field" onclick="save();" />

    Every time someone enters something in the input field and clicks save, localStorage only saves the value under that same key. Does this conflict with the way values are stored, i.e. is the previous value simply replaced with the latest input? Also, is there any way to prevent localStorage from being cleared when the cache is cleared? Thanks.

    Read the article

  • Any strategies for assessing the trade-off between CPU loss and memory gain from compression of data

    - by indiehacker
    Are very large TextProperties a burden? Should they be compressed? Say I have information stored in two attributes of type TextProperty in my datastore entities. The strings are always the same length of 65,000 characters and have lots of repeating integers, a sample appearing as follows: entity.pixel_idx = 0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,5,5,5,5,5,5,5,5,5,5,5,5...etc. entity.pixel_color = 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,...etc. So the above could be represented using much less storage by compressing it, say using only each integer and the length of its run ('0,8' for '0,0,0,0,0,0,0,0'), but then it takes time and CPU to compress and decompress. Any general ideas? Are there any tricks for testing the different approaches to the problem?
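    One way to answer the "tricks for testing" part is to benchmark the candidate encodings outside the datastore on a representative 65,000-character string. A minimal Python sketch (run-length pairs plus zlib; the sample string is a stand-in, not real pixel data):

        import timeit
        import zlib

        def rle_encode(csv_values: str) -> str:
            """'0,0,0,1,1' -> '0:3,1:2' (value:run-length pairs)."""
            out, values = [], csv_values.split(",")
            prev, count = values[0], 1
            for v in values[1:]:
                if v == prev:
                    count += 1
                else:
                    out.append(f"{prev}:{count}")
                    prev, count = v, 1
            out.append(f"{prev}:{count}")
            return ",".join(out)

        sample = ",".join((["0"] * 9 + ["1"] * 11 + ["5"] * 12) * 1000)       # stand-in for entity.pixel_idx
        encoded = rle_encode(sample)
        compressed = zlib.compress(sample.encode())

        print(len(sample), len(encoded), len(compressed))                     # raw vs RLE vs zlib size
        print(timeit.timeit(lambda: rle_encode(sample), number=10))           # CPU cost to encode
        print(timeit.timeit(lambda: zlib.decompress(compressed), number=10))  # CPU cost to decode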

    Read the article

  • I need to use a string variable in a stored procedure in SQL Server 2005

    - by bassam
    I have this procedure:

        CREATE Proc [dbo].Salse_Ditail
            -- Add the parameters for the stored procedure here
            @Report_Form varchar(1),
            @DateFrom datetime,
            @DateTo datetime,
            @COMPANYID varchar(3),
            @All varchar(1),
            @All1 varchar(1),
            @All2 varchar(1),
            @All3 varchar(1),
            @All4 varchar(1),
            @All5 varchar(1),
            @Sector varchar(10),
            @Report_Parameter nvarchar(max)
        as
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            DECLARE @STRWhere nvarchar(max)

            IF @All5=0 AND @All4=0 AND @All3=0 AND @All2=0 AND @All1=0 and @All=1
                set @STRWhere = N'and Sector_id = @Sector'

            if @Report_Form = 1 or @Report_Form = 3 or @Report_Form = 4
                SELECT RETURNREASONCODEID, SITE, SITE_NAME, Factory_id, Factory_Name, Sector_id, sector_name,
                       Customer_name, Customer_id, ITEMID, ITEMNAME, SALESMANID, SALESMAN_NAME, Net_Qty,
                       Net_Salse, Gross_Sales, Gross_Qty, NETWEIGHT_Gross, NETWEIGHT_salse_Gross, NETWEIGHT_NET,
                       NETWEIGHT_salse_NET, Return_Sales, Free_Good, CollectionAmount
                FROM hal_bas_new_rep
                WHERE DATAAREAID = @COMPANYID
                  AND INVOICEDATE >= @DateFrom
                  AND INVOICEDATE <= @DateTo
                  and Report_Activti = @Report_Form

            if @Report_Form = 2
                SELECT RETURNREASONCODEID, RETURNREASONDESC, SITE, SITE_NAME, Factory_id, Factory_Name,
                       Sector_id, sector_name, Customer_name, Customer_id, ITEMID, ITEMNAME, SALESMANID,
                       SALESMAN_NAME, Return_Sales
                FROM dbo.hal_bas_new_rep
                WHERE DATAAREAID = @COMPANYID
                  AND INVOICEDATE >= @DateFrom
                  AND INVOICEDATE <= @DateTo
                  and Report_Activti = @Report_Form
                  and RETURNREASONCODEID in ( SELECT Val FROM dbo.fn_String_To_Table(@Report_Parameter,',',1) )
                  /* @STRWhere // question: how can I use the variable here? */
        end
        GO

    As you see I'm constructing a condition for the WHERE clause in a variable, but I don't know how to use it.

    Read the article

  • Manipulating the case of the build macros in Visual Studio: $(TargetDir)

    - by miked
    I've come across a weird problem today in Visual Studio 2005. If I create a new configuration "NewConfiguration" for one of my projects, the output directory referred to by the project in $(TargetDir) is "/path/to/build/area/newconfiguration". Note the loss of capital letters; I'd expect it to be "/path/to/build/area/NewConfiguration". In another project that I created yesterday, the capital letters are there. Normally this wouldn't be a problem, but it's part of a fairly complicated build system where some of it is on Unix and we need to worry about case-sensitive filenames. Does anyone know where the source strings for Visual Studio macros like $(TargetDir) are stored, so that I can get the case to be consistent and match what we need for our build system?

    Read the article

  • Storing datetime in database?

    - by Curtis White
    I'm working on a blog and want to show my posts in the Eastern time zone. I figured that storing everything in UTC would be the proper way. This creates a few challenges though: I have to convert all times from UTC to Eastern. This is not a biggie, but it adds a lot of code. And the "biggie" is that I use a short date to reference the posts by passing it in a query, a la Blogger. The problem is that there is no way to convert the short date to the proper UTC date, because I'm lacking the posted-time info. Hmm, any problem with just storing all dates in Eastern time? This would certainly make things easier for the rest of the application, but if I ever needed to change time zones, everything would be stored wrong.
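    The post is about an ASP.NET blog, but the store-UTC idea is easy to sketch in Python for illustration (the America/New_York zone and the function names are assumptions): convert to Eastern only for display, and turn the short Eastern date from the URL into a UTC range covering that whole local day, which sidesteps the missing posted-time info.

        from datetime import datetime, timedelta, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+

        EASTERN = ZoneInfo("America/New_York")

        def to_display(utc_dt: datetime) -> str:
            """Render a stored UTC timestamp in Eastern time."""
            return utc_dt.astimezone(EASTERN).strftime("%Y-%m-%d %H:%M")

        def utc_range_for_local_date(year: int, month: int, day: int):
            """Map a short Eastern date to the UTC interval covering that local day."""
            start_local = datetime(year, month, day, tzinfo=EASTERN)
            end_local = start_local + timedelta(days=1)
            return start_local.astimezone(timezone.utc), end_local.astimezone(timezone.utc)

        lo, hi = utc_range_for_local_date(2010, 4, 9)
        print(lo, hi)   # query stored UTC timestamps with lo <= posted_at < hi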

    Read the article

  • Interacting With Class Objects in Ruby

    - by michaelmichael
    How can I interact with objects I've created based on their given attributes in Ruby? To give some context, I'm parsing a text file that might have several hundred entries like the following:

        ASIN: B00137RNIQ
        -------------------------Status Info-------------------------
        Upload created: 2010-04-09 09:33:45
        Upload state: Imported
        Upload state id: 3

    I can parse the above with regular expressions and use the data to create new objects in a "Product" class:

        class Product
          attr_reader :asin, :creation_date, :upload_state, :upload_state_id
          def initialize(asin, creation_date, upload_state, upload_state_id)
            @asin = asin
            @creation_date = creation_date
            @upload_state = upload_state
            @upload_state_id = upload_state_id
          end
        end

    After parsing, the raw text from above will be stored in objects that look like this:

        [#<Product:0x00000101006ef8 @asin="B00137RNIQ", @creation_date="2010-04-09 09:33:45 ", @upload_state="Imported ", @upload_state_id="3">]

    How can I then interact with the newly created objects? For example, how might I pull all the creation dates for objects with an upload_state_id of 3? I get the feeling I'm going to have to write class methods, but I'm a bit stuck on where to start.

    Read the article

  • Perl, deleting an .xml file created and opened with IO::File and XML::Writer?

    - by Sho Minamimoto
    So I'm running through a list of things and have code that creates an .xml file with IO::File called $doc, then I make a new writer with XML::Writer(OUTPUT => $doc). More code runs and I build a big xml file with XML::Writer. Then, near the end of the file, I find out if I need this file at all. If I do need it, I just $writer->end(); $doc->close(); but if I don't need it, what should I enter to just delete all data I've stored/saved and move on to the next file? I tried unlink($docpath) (before and after $doc->close()); the file was not deleted.

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The table is small and does not contain any data that needs to be stored on disk. So I ask which is faster: InnoDB or MEMORY (HEAP)? My thoughts are: Both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row lock vs. MEMORY table lock. Will this present the bottleneck in the MEMORY implementation? Thanks for your thoughts!
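    For the "which is faster" part, one practical approach is to benchmark both engines with the actual statement mix. A rough Python sketch using mysql-connector-python (the connection details, table names and schema are assumptions, not from the post): create the same small table once with ENGINE=InnoDB and once with ENGINE=MEMORY, then time a burst of single-row updates against each.

        import time
        import mysql.connector  # assumption: mysql-connector-python installed, test server running

        def updates_per_second(cnx, table, rows=1000, iterations=10000):
            """Time a burst of single-row UPDATEs against the given table."""
            cur = cnx.cursor()
            start = time.perf_counter()
            for i in range(iterations):
                cur.execute(f"UPDATE {table} SET score = score + 1 WHERE id = %s", (i % rows,))
            cnx.commit()
            return iterations / (time.perf_counter() - start)

        cnx = mysql.connector.connect(user="bench", password="bench", database="benchdb")
        for table in ("t_innodb", "t_memory"):  # hypothetical: identical schema, different ENGINE
            print(table, round(updates_per_second(cnx, table)), "updates/s")
        cnx.close()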

    Read the article

  • How to Import Excel Data into Silverlight App for Visualization?

    - by Ulf
    Hi there, I'm building a Silverlight application (Silverlight 4, Visual Studio 2010) in which the user can generate charts (line charts, bar charts) dynamically by entering a specific time period. At the moment I have no idea how to import the data into Silverlight to generate the charts. My data is stored in 4 Excel tables, and I have no clue what would be the best way to get that data into Silverlight. I read a lot of examples using SQL Server as the database, but unfortunately SQL Server is not an option for me. Any help would be great!

    Read the article

  • What languages, frameworks, and technologies have you used to implement document searching?

    - by Bill Brasky
    I am at a new company and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not a MS Word doc). If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.

    Read the article

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for a date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! Which I guess is because I am doing TO_CHAR on EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= to_date('04/10/2010','MM/DD/YYYY')
          AND create_date <= to_date('04/10/2010','MM/DD/YYYY')

    I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
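    One common way to keep the index usable is to compare the raw column against a half-open day range, binding plain date values. A small Python sketch of computing those bind values (the SQL text is illustrative; the cx_Oracle call in the comment is an assumption about the driver in use):

        from datetime import datetime, timedelta

        def day_bounds(day_str: str):
            """Turn '04/10/2010' into [midnight, next midnight) bounds for a timestamp column."""
            lo = datetime.strptime(day_str, "%m/%d/%Y")
            return lo, lo + timedelta(days=1)

        lo, hi = day_bounds("04/10/2010")
        sql = """SELECT bunch, of, stuff, create_date
                 FROM myTable
                 WHERE create_date >= :lo AND create_date < :hi"""  # raw column, exclusive upper bound
        # e.g. with cx_Oracle: cursor.execute(sql, lo=lo, hi=hi)
        print(lo, hi)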

    Read the article

  • MySQL query that has an array

    - by Xainee Khan
        // get all ids of your friends that have installed your application
        $friend_pics = $facebook->api(
            array(
                'method' => 'fql.query',
                'query'  => "SELECT uid FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1='$user') AND is_app_user = 1"
            )
        );
        // this query works fine

        // your top 10 friends in the app
        $result = "SELECT * FROM fb_user WHERE user_id IN($friend_pics) ORDER BY oldscore DESC LIMIT 0,10";
        db_execute($result);

    I want to retrieve the top ten scorers from my database (stored in oldscore), but in my second query the array $friend_pics is not working, I guess. Please help. Thanks.
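    The underlying issue is that an array can't be dropped straight into IN(...); it has to be flattened into a comma-separated list, ideally as one placeholder per value. The post is PHP, but here is a hedged Python sketch of the placeholder approach (table and column names taken from the post; the ids and connection details are made up):

        import mysql.connector  # assumption: mysql-connector-python

        friend_ids = [1001, 1002, 1003]  # hypothetical ids returned by the FQL call
        placeholders = ", ".join(["%s"] * len(friend_ids))
        sql = (f"SELECT * FROM fb_user WHERE user_id IN ({placeholders}) "
               f"ORDER BY oldscore DESC LIMIT 0, 10")

        cnx = mysql.connector.connect(user="app", password="secret", database="appdb")
        cur = cnx.cursor()
        cur.execute(sql, friend_ids)  # each id is bound separately, nothing is interpolated by hand
        top10 = cur.fetchall()
        cnx.close()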

    Read the article

  • MySQL Cluster data nodes - slow SELECTs

    - by Boyan Georgiev
    Hi to all. First off, I'm new to MySQL Cluster. This is my pain: I've managed to set up a MySQL Cluster with two data nodes, two SQL nodes and one management server. Everything works pretty well, except the following: my data nodes are spread across an intranet link, which adds latency to communication between the data nodes. Apparently, due to MySQL Cluster's internal partitioning scheme, when my PHP application pulls data from the cluster via SELECT queries, parts of the data are pulled from both data nodes. This makes the page appear on screen REALLY slowly. If I bring one data node offline, the data can only be pulled from that single remaining data node, and thus the final result (HTML output) appears on the screen in a very timely fashion. So, my question is this: can the data nodes/cluster be told to pull data from partitions stored only on a particular data node?

    Read the article

  • Get command object

    - by user198147
    Hi all, I am writing a Spring 2.5 application, and in my JSP I'm writing my own tags. It's about a list of objects... when I change the number of rows that the list shows (a combobox), I do a submit on my form, returning back to the view (obviously with the new number of rows returned). When listing with my own tags I need to get the properties from my command object. I have access to the pageContext object, but I can't figure out where the command object is stored. Could someone please help me? Thanks in advance, Luisa

    Read the article

  • Decimal rounding strategies in enterprise applications

    - by Sapphire
    Well, I am wondering about something with rounding decimals and storing them in the DB. The problem is like this: let's say we have a customer and an invoice. The invoice has a total price of $100.495 (due to some discount percentage that is not an integer), but it is shown as $100.50 (rounded, just for printing on the invoice). It is stored in the DB with the price of $100.495, which means that when the customer makes a deposit of $100.50 there will be $0.005 extra on the account. If this is rounded, it will appear as $0, but after a couple of invoices it would keep accumulating, which would appear wrong (although it actually is not). What is best to do in this case: store the value of $100.50, or leave everything as is?
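    A small sketch of the trade-off using Python's decimal module (the amounts are taken from the post): keep the exact value in storage, quantize only for display, and track the rounding residual explicitly so it can be carried forward or written off instead of silently accumulating.

        from decimal import Decimal, ROUND_HALF_UP

        exact_total = Decimal("100.495")   # what the discount actually produces
        displayed = exact_total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)  # printed on the invoice
        payment = Decimal("100.50")        # what the customer deposits

        residual = payment - exact_total   # 0.005 left on the account
        print(displayed, residual)         # 100.50  0.005

        # Over many invoices the residuals add up; summing them keeps the "extra" cents visible.
        print(sum([Decimal("0.005")] * 10))  # 0.050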

    Read the article

  • Why should I use an N-Tier approach when using a SqlDataSource is a lot easier?

    - by The_AlienCoder
    When it comes to web development I have always tried to work SMART, not HARD. So for a long time my approach to interacting with databases in my ASP.NET projects has been this: 1) Create my stored procedures. 2) Drag a SqlDataSource control onto my aspx page. 3) Bind a DataList control to my SqlDataSource. 4) Insert, update and delete by using my DataList or programmatically using the built-in SqlDataSource methods, e.g. MySqlDataSource.InsertParameters["author"].DefaultValue = TextBox1.Text; MySqlDataSource.Insert(); Recently, however, I got a relatively easy web project, so I decided to employ a 3-tier model... but I got exhausted halfway and it just didn't seem worth it! It seemed like I was working too HARD for a project that could have been easily accomplished by a couple of SqlDataSource controls. So why is the N-Tier model better than my approach? Has it anything to do with performance? What are the advantages of the ObjectDataSource control over the SqlDataSource control?

    Read the article

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NO-SQL? If it makes any difference in your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and data readers. Maybe that matters, maybe it doesn't. Here are some more specific questions: How do I need to think about joins? How will I query without a SELECT statement? What happens to my existing stored objects when I add a property in my code? (Feel free to add questions of your own here.)

    Read the article

  • string manipulations in C

    - by Vivek27
    Following are some basic questions that I have with respect to strings in C. If string literals are stored in a read-only data segment and cannot be changed after initialisation, then what is the difference between the following two initialisations?

        char *string = "Hello world";
        const char *string = "Hello world";

    When we dynamically allocate memory for strings, I see that the following allocation appears capable of holding a string of arbitrary length. Though this allocation works, I understand/believe that it is always good practice to allocate the actual size of the string rather than the size of the data type. Please guide me on the proper usage of dynamic allocation for strings.

        char *string = (char *)malloc(sizeof(char));

    Read the article

  • Best way for saving infinite playlists (arrays) into a DB? (PHP, MySQL)

    - by Ole Jak
    So the client gives me a string like "1,23,23,abc,ggg,544,tf4," from user 12. There can be an infinite number of elements, with no spaces, just a value,value,... structure. I have a users table (with the user's uId (key), names, etc.). I have a streams table with (sId (key), externalID, etc. values). The user sends me externalIds, and I need to have the externalIds in the playlist (not my sIds). I need some way to store such an array in my DB and be able to get it back out. I need to be able to do 2 things: return such a string back to the user, and get an array from it like {1; 23; 23; abc; ggg; 544; tf4;}. So what is the best method (best here meaning a short, small amount of code) to store such data in the DB and retrieve the stored data in both ways shown?
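    The post is PHP/MySQL, but the round trip itself is tiny in any language; a Python sketch for illustration (split/join mirrors PHP's explode/implode, and the playlist_items idea in the comment is an assumption, not something from the post):

        def playlist_to_items(raw: str) -> list[str]:
            """'1,23,abc,' -> ['1', '23', 'abc'] (drops the empty tail left by the trailing comma)."""
            return [item for item in raw.split(",") if item != ""]

        def items_to_playlist(items: list[str]) -> str:
            """['1', '23', 'abc'] -> '1,23,abc,' (same trailing-comma format the client sends)."""
            return ",".join(items) + "," if items else ""

        raw = "1,23,23,abc,ggg,544,tf4,"
        items = playlist_to_items(raw)
        assert items_to_playlist(items) == raw
        # Storage options: keep `raw` in a TEXT column keyed by user id (shortest code), or insert
        # one (user_id, position, external_id) row per item in a playlist_items table, which also
        # allows per-item queries.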

    Read the article

  • Should I uniform different API calls to a single format before caching?

    - by bluedaniel
    The problem is that I am recreating the social media widgets/icons on http://about.me/bluedaniel (that's me). Anyhow, there can be up to 6 or 7 different API calls on the page, and obviously I am caching them, at the moment with Memcached. The question is, as they arrive in various formats and sizes (fb-json, linkedin-xml, wordpress-rss, etc.), should I uniformly format/convert them before storing them in the cache? At present I have recreated the HTML widget and then stored that, but I worry about saving huge blocks of HTML in the cache, as it doesn't seem that smart.
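    A sketch of the "normalize before caching" option in Python (the field mappings, the record shape and the memcache client are all illustrative assumptions; the real sources are the Facebook JSON, LinkedIn XML and WordPress RSS feeds mentioned above): each source gets a small normalizer that returns the same record shape, and only that compact common form goes into the cache, with HTML rendered at request time.

        import json

        def normalize_facebook(payload: dict) -> list[dict]:
            # hypothetical mapping from the Facebook JSON shape to a common record
            return [{"source": "facebook", "title": p.get("message", ""), "url": p.get("link", "")}
                    for p in payload.get("data", [])]

        def normalize_wordpress(rss_items: list[dict]) -> list[dict]:
            # hypothetical mapping from parsed RSS items to the same common record
            return [{"source": "wordpress", "title": i["title"], "url": i["link"]}
                    for i in rss_items]

        def store(cache, source: str, records: list[dict]) -> None:
            # one small JSON blob per source instead of a pre-rendered HTML block
            cache.set("feed:" + source, json.dumps(records), time=300)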

    Read the article

  • Android and PHP - Do I need to use sessions?

    - by jtnire
    I have created an Android app that communicates with a PHP web server. They both send JSON to each other. My app is almost finished; however, there is one thing left to do: authentication. Since the user's username and password will be stored in Android SharedPreferences, is there any need to use PHP sessions, given that the user won't need to enter the username/password at every request? Since I can just send the username and password in the HTTP POST header for every request, and I will be using SSL, is this sufficient? I guess I could add an extra field in the header called 'random' that just adds a random value, to use as a salt so that the encrypted SSL payload will be different every time. The reason I don't want to use sessions is that my Android app would either have to handle cookies or manage the storage of the session ID. If there are some serious cons to using my method above, then I'm more than happy to use sessions; however, all advice is appreciated. Thanks

    Read the article

  • How to see functions implemented in R packages

    - by rinzy kutex
    How do I see the actual code for the functions implemented in R packages? For instance, I have loaded the caret package and I would like to see the code used for the function "createMultiFolds". I have gone to where the packages are stored but can't seem to find what I am looking for. For instance, folders like Meta in the root folder of the caret package have files with the .rds extension, which I can read into R with the gzfile command, but I still can't seem to find what I am looking for, i.e. the exact code for the functions, e.g. createMultiFolds. I would like to adapt some of these functions specifically to my research needs by making small changes to the code to output the results in the way that I want, without having to "re-invent the wheel". I will be glad if someone can help me. Thanks.

    Read the article

  • Argc/Argv C Problems

    - by Salman
    Hey all, If I have the following code:

        main(int argc, char *argv[]) {
            char serveradd[20];
            strcpy(serveradd, argv[1]);
            int port = atoi(argv[2]);
            printf("%s %d \n", serveradd, port);

    The first two arguments to the command line are printed. However, if I do this:

        char serveradd[20];
        strcpy(serveradd, argv[1]);
        int port = atoi(argv[2]);
        char versionnum[1];
        strcpy(versionnum, argv[3]);
        printf("%s %d %s \n", serveradd, port, versionnum);

    The first argument (serveradd) does not print out to the screen and is not being stored... Why is this happening and how can I fix it? Thanks!

    Read the article

  • Comparing two xml files

    - by Ragini
    I have two large XML files, almost 1.4 MB each. I want to compare them and see the differing parts. I am using Linux. Is there any free tool which can do this for me? Or any other technique? I used the "diff" command in Linux and tried to output the result to another file (diff file1.xml file2.xml > result.xml), but the resulting file showed "Could not parse the xml". However, it showed something on screen. I would like the differing part to be stored somewhere if possible (or at least I should be able to see it properly). Thanks Ragini
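    diff compares line by line, so two XML files that differ only in formatting can still produce a wall of noise. A Python sketch that canonicalizes both documents first and writes a readable diff to a file (needs Python 3.8+ for ElementTree's canonicalize; the file names are taken from the post):

        import difflib
        import xml.etree.ElementTree as ET

        def canonical_lines(path: str) -> list[str]:
            """Parse the file and return its canonical (C14N) form, roughly one element per line."""
            c14n = ET.canonicalize(from_file=path)
            return c14n.replace("><", ">\n<").splitlines(keepends=True)

        a = canonical_lines("file1.xml")
        b = canonical_lines("file2.xml")

        with open("result.diff", "w") as out:
            out.writelines(difflib.unified_diff(a, b, fromfile="file1.xml", tofile="file2.xml"))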

    Read the article

  • Is there a work-around that allows missing data to equal NULL for LOAD DATA INFILE in MySQL?

    - by richardh
    I have a lot of large CSV files with NULL values stored as ,, (i.e., no entry). After a lot of searching I found that this is a known "bug", although it may be a feature for some users. Is there a way that I can fix this on the fly without pre-processing? These data are all numeric, so a zero value is very different from NULL. Or, if I have to do pre-processing, is there one approach that is most promising for dealing with tens of CSV files of 100 MB to 1 GB? Thanks!
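    For the on-the-fly route, LOAD DATA INFILE can read a column into a user variable and then SET col = NULLIF(@col, ''). If pre-processing turns out to be simpler, a short Python sketch that rewrites empty fields as \N (the sequence LOAD DATA INFILE loads as NULL); the file names are placeholders:

        import csv

        def blank_to_null(src: str, dst: str) -> None:
            """Copy a CSV, rewriting empty fields as \\N so LOAD DATA INFILE loads them as NULL."""
            with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
                writer = csv.writer(fout)
                for row in csv.reader(fin):
                    writer.writerow([field if field != "" else r"\N" for field in row])

        blank_to_null("data_raw.csv", "data_nulls.csv")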

    Read the article
