Search Results

Search found 94339 results on 3774 pages for 'system data'.

  • How can I use EF in Windows Forms to insert new data?

    - by Al0NE
    Hi. Maybe it's a simple question, but it's my first time combining EF with a Windows Forms app. I'm building the app with EF by adding an .edmx file as a data source, then dragging the tables onto the form to get a binding navigator and binding source. When I press the Add button in the navigator it lets me enter new data, but when I save the context I get NULL for every field except the auto-increment field (the ID field). What should I do to save multiple entries? Do I have to loop over all the entities in the binding source and add them to the context? Thanks in advance. Update: I forgot to mention that when I use a DataGridView the items are added correctly, but with the "Details" view they are not.
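
    One likely fix, sketched below: commit the pending "Details" edit with EndEdit before saving, and attach any navigator-created rows to the context. This is a minimal sketch, not the asker's actual code; MyEntities, Customer, and customersBindingSource are hypothetical names standing in for the generated context, entity type, and designer-created binding source.

    using System;
    using System.Windows.Forms;

    public partial class CustomerForm : Form
    {
        // MyEntities is the hypothetical ObjectContext generated from the .edmx.
        private readonly MyEntities context = new MyEntities();

        private void saveButton_Click(object sender, EventArgs e)
        {
            // Commit the row being edited in the "Details" view; without this
            // the typed values never reach the underlying bound object.
            customersBindingSource.EndEdit();

            foreach (Customer c in customersBindingSource.List)
            {
                // Rows created through the navigator are not tracked yet.
                // A default ID of 0 marks them as new here (an assumption).
                if (c.ID == 0)
                    context.AddToCustomers(c);   // generated add method; ObjectSet.AddObject in EF4
            }

            context.SaveChanges();
        }
    }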

  • Flex Tree with infinite parents and children

    - by Tempname
    I am working on a tree component and I am having a bit of an issue populating the data provider for this tree. The data that I get back from my database is a simple array of value objects. Each value object has two properties, ObjectID and ParentID. For parents the ParentID is null, and for children the ParentID is the ObjectID of the parent. Any help with this is greatly appreciated. Essentially the tree should look something like this:

    Parent1
        Child1
            Child1
            Child2
        Child1
        Child2
    Parent2
        Child1
        Child2
            Child3
                Child1

    This is the current code that I am testing with:

    public function setDataProvider(data:Array):void
    {
        var tree:Array = new Array();

        for (var i:Number = 0; i < data.length; i++)
        {
            // do the top level array
            if (!data[i].parentID)
            {
                tree.push(data[i], getChildren(data[i].objectID, data));
            }
        }

        function getChildren(objectID:Number, data:Array):Array
        {
            var childArr:Array = new Array();
            for (var k:Number = 0; k < data.length; k++)
            {
                if (data[k].parentID == objectID)
                {
                    childArr.push(data[k]);
                    //getChildren(data[k].objectID, data);
                }
            }
            return childArr;
        }

        trace(ObjectUtil.toString(tree));
    }

    Here is a cross-section of my data:

    ObjectID    ParentID
    1           NULL
    10          NULL
    8           NULL
    6           NULL
    4           6
    3           6
    9           6
    2           6
    11          7
    7           8
    5           8

  • Does compressed sensing bring anything new to data compression?

    - by anon
    Compressed sensing is great for situations where capturing data is expensive (in energy or time), because far fewer samples need to be taken. However, in situations like image compression, where the data is already on the computer, does compressed sensing offer anything? For example, would it give better data compression? Would it result in better image search? (Note: if you don't know what the field of compressed sensing is, please do not respond.)

  • Is it possible to use template and value at the same time on data-bind?

    - by Anonymous
    I have two sections of code. Code #1:

    <select data-bind="options: operatingSystems,
                       optionsText: function (item) { return item.Name },
                       value: selectedOperatingSystem"></select>

    Code #2:

    <script type="text/html" id="os-template-detail">
        <option data-bind="text: Name" class="body-text"></option>
    </script>
    <select data-bind="value: selectedOperatingSystem,
                       template: { name: 'os-template-detail', foreach: operatingSystems }"></select>

    Both show the data from JSON correctly. With code #1 the bound value is updated when I select an item in the list, while code #2 does not update anything when I change the item. I am pretty new to Knockout.js and have no idea why code #2 doesn't work. Is it a limitation of Knockout that prevents me from using template and value at the same time?

  • Breaking change October: status 'waiting' when creating a custom audience with a user data file?

    - by THACH LN
    In my system, I create a custom audience (data file, mobile advertiser IDs) with 10k users. After creating it, I refresh the API link to get information about the audience I've just created: https://graph.facebook.com/act_xxx/customaudiences?fields=id,account_id,name,lookalike_spec,retention_days,subtype,approximate_count,rule,delivery_status,operation_status,data_source,permission_for_actions&limit=500&access_token=xxx. (After the breaking change in October 2014 the status field is no longer returned, so I look at operation_status.) I see this:

    "operation_status": {
        "code": 410,
        "description": "No file has been uploaded for this audience, or the previous upload has failed due to system error. Please try uploading the file again."
    }

    But after about 10 seconds I refresh again, and operation_status changes to:

    "operation_status": {
        "code": 200,
        "description": "Normal"
    }

    My question is: how do I check that this audience's status is waiting (file uploading), the way it shows when I create one on Facebook rather than from my own system? Thanks for all answers.

  • When encrypting data that is not an even multiple of the block size, do I have to send a complete last block?

    - by WilliamKF
    If I am using a block cipher such as AES, which has a block size of 128 bits, what do I do if my data is not an even multiple of 128 bits? I am working with packets of data and do not want to change the size of a packet when encrypting it, yet the data is not an even multiple of 128 bits. Does AES allow handling of a short final block without changing the size of the message once encrypted?
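
    One standard answer, sketched below, is to run the block cipher in a stream-style mode such as CFB (or CTR), where the ciphertext is exactly as long as the plaintext and no padding block is ever added. A minimal C# sketch assuming 8-bit CFB as exposed by AesCryptoServiceProvider on the .NET Framework; key and IV handling is elided here and would still need to be agreed with the receiver.

    using System;
    using System.Security.Cryptography;

    class CfbDemo
    {
        static void Main()
        {
            byte[] plaintext = new byte[45];      // deliberately not a multiple of 16 bytes
            new Random(1).NextBytes(plaintext);

            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Mode = CipherMode.CFB;        // stream-style mode built from the block cipher
                aes.FeedbackSize = 8;             // CFB8 handles any byte length
                aes.Padding = PaddingMode.None;   // nothing appended, so packet size is unchanged

                using (ICryptoTransform enc = aes.CreateEncryptor())
                {
                    byte[] ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
                    Console.WriteLine("{0} bytes in, {1} bytes out", plaintext.Length, ciphertext.Length);
                }
            }
        }
    }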

  • System.Net.Mail.SmtpClient cannot authenticate against a POP3 server, right?

    - by Herchu
    One of our customers seems to have a very old email system, the kind that asks you to authenticate to the POP3 server before allowing you to send messages through the SMTP server. Regrettably, we have to take our customer's word for it, because we cannot access their facilities. But as far as I remember, years ago there were mail systems where, once you logged in to the POP3 server, the SMTP server was kept open for a few minutes for the client's IP. Our application sends messages using System.Net.Mail.SmtpClient, which seems to be unable to authenticate to those kinds of servers. Is that correct? If so, what would be the simplest workaround? I was thinking of a minimal POP3 implementation (just the login part of the protocol). Would that work? Thanks in advance.
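
    The minimal-POP3 idea should work, since a "POP before SMTP" gateway only needs to see a successful POP3 login from the client's IP. A rough sketch under that assumption (host names and credentials are placeholders):

    using System.IO;
    using System.Net.Mail;
    using System.Net.Sockets;

    class PopBeforeSmtp
    {
        // Just the login part of POP3: greeting, USER, PASS, QUIT.
        static void Pop3Login(string host, string user, string password)
        {
            using (var client = new TcpClient(host, 110))
            using (var reader = new StreamReader(client.GetStream()))
            using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true, NewLine = "\r\n" })
            {
                reader.ReadLine();                    // +OK greeting
                writer.WriteLine("USER " + user);
                reader.ReadLine();                    // +OK
                writer.WriteLine("PASS " + password);
                if (!reader.ReadLine().StartsWith("+OK"))
                    throw new IOException("POP3 login failed");
                writer.WriteLine("QUIT");
            }
        }

        static void Main()
        {
            Pop3Login("mail.example.com", "user", "secret");

            // The server should now accept SMTP from this IP for a few minutes.
            var smtp = new SmtpClient("mail.example.com");
            smtp.Send("from@example.com", "to@example.com", "Test", "Sent after POP3 login.");
        }
    }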

  • How to catch any exception (System.Exception) without a warning in F#?

    - by LLS
    I tried to catch an Exception, but the compiler gives a warning: "This type test or downcast will always hold".

    let testFail () =
        try
            printfn "Ready for failing..."
            failwith "Fails"
        with
        | :? System.ArgumentException -> ()
        | :? System.Exception -> ()

    The question is: how do I do it without the warning? (I believe there must be a way to do this, otherwise there should be no warning.) Like in C#:

    try
    {
        Console.WriteLine("Ready for failing...");
        throw new Exception("Fails");
    }
    catch (Exception)
    {
    }

  • How could I use AJAX to create a JSON data source .txt file?

    - by Adam
    I'm creating a form that collects standard information about customers. When the user hits Save, I would like to create a .txt file that would later be used to retrieve all of the data collected from customers. I'm using DataTables, a jQuery plugin, to display the data. The .txt file would be formatted to be saved as such:

    { "aaData": [
        ["client 1 name","address","city","state","zip"],
        ["client 2 name","address","city","state","zip"],
        ["client 3 name","address","city","state","zip"],
        ...
        ["client x name","address","city","state","zip"]
    ] }

    where "aaData" is used by DataTables. This is going to be part of an iPhone app, so the data source has to be very small and not reliant on a constant connection to a server; essentially, a client-side data source. The .txt file also has to be updated when edited and saved, and then replaced every time it is downloaded.

  • Best way to track impressions/clicks in a bespoke advertisement system?

    - by Martin Bean
    I've been asked to create a bespoke advertisement system, despite suggesting open-source alternatives such as OpenX and DoubleClick for Publishers (the former Google Ad Manager). I've got the basics of the system set up, i.e. uploading creatives, creating positions, and a mechanism to place creatives within positions; however, the area I'm stuck on is impression and click tracking. At the moment impression and click counts are stored with the creative, but this means impressions/clicks can't be queried. For example, we can't find out how many impressions there were in position x between date y and date z. How would I go about storing that kind of data? My theory was to store the creative ID, position ID, and a timestamp in a database table, but given the amount of traffic the site has, this would produce a very large database very quickly. If anyone could give me a pointer or two, that would be great.

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting all possibly confidential info. I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at the database level (triggers, stored procedures, etc.). Win2000 Server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :(

    The core process is a program that executes transactions; specifically, it executes a stored procedure with various parameters, let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular, ranging from 0 to 60 times a minute depending on which items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth; the reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered:

    - In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask)
    - The select is done on a field that is NOT the primary key
    - No index exists on this field, so a table scan is performed. 7 times. Per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes) and saw an immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while transactions are still taking place? (I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a long time with 10 years of data.

  • How much data can a JavaScript global variable hold?

    - by alex
    Hi, to enable a "go back" function with an AJAX div I have created these simple functions, and I was wondering how much data a JavaScript global variable can hold:

    var dataAfterSearch; // global variable which holds our search results

    /**
     * Display the previous state.
     */
    function goBackAfterSearch() {
        $.ajaxSetup({ cache: false });
        //alert("Previous search: " + dataAfterSearch);
        $('#result').html(dataAfterSearch);
        paginateIt();
    }

    /**
     * Set the global dataAfterSearch.
     */
    function setDataAfterSearch(data) {
        dataAfterSearch = data;
    }

    Kind regards

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report their previous responses back to them. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

    CREATE TABLE [dbo].[results](
        [id] [bigint] IDENTITY(1,1) NOT NULL,
        [userid] [int] NULL,
        [variable] [varchar](8) NULL,
        [value] [tinyint] NULL,
        [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

    SELECT t.id, t.variable, t.value
    FROM results t WITH (NOLOCK)
    WHERE t.userid = '2111846'
      AND (t.variable = 'internat' OR t.variable = 'veteran' OR t.variable = 'athlete')
      AND t.id IN (SELECT MAX(id) AS id
                   FROM results WITH (NOLOCK)
                   WHERE userid = '2111846'
                     AND (variable = 'internat' OR variable = 'veteran' OR variable = 'athlete')
                   GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK hint here.

  • How can I get an Active Directory data code from System.DirectoryServices[.Protocols]?

    - by Alex Waddell
    When using .Protocols, I can run the following pseudocode to authenticate to an AD:

    try
    {
        LdapConnection c = new LdapConnection("User", "Password");
        c.Bind();
    }
    catch (LdapException le)
    {
        Debug.WriteLine(le.ResultCode);
    }

    This code lets me get the "Invalid Credentials" error string and the AD code 49, but I need to get the additional data errors, similar to what an LDAP Java client shows:

    [LDAP: error code 49 - 80090308: LdapErr: DSID-0C09030F, comment: AcceptSecurityContext error, data 525, vece]

    525 - user not found
    52e - invalid credentials (bad password)
    530 - logon time restriction
    532 - password expired
    533 - account disabled
    701 - account expired
    773 - user must reset password
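
    One way to get at those sub-codes, sketched below: on a failed bind, LdapException.ServerErrorMessage in System.DirectoryServices.Protocols usually carries the same "data NNN" text the Java client prints, so it can be pulled out with a regular expression. Server, domain, and credentials here are placeholders; treat this as a sketch, not a guaranteed behaviour of every AD setup.

    using System;
    using System.DirectoryServices.Protocols;
    using System.Net;
    using System.Text.RegularExpressions;

    class AdSubCode
    {
        static void Main()
        {
            var connection = new LdapConnection("dc.example.com")
            {
                AuthType = AuthType.Basic,
                Credential = new NetworkCredential("user", "badpassword", "EXAMPLE")
            };

            try
            {
                connection.Bind();
            }
            catch (LdapException le)
            {
                // ServerErrorMessage looks like:
                // "80090308: LdapErr: DSID-0C09030F, comment: AcceptSecurityContext error, data 52e, v1db1"
                Match m = Regex.Match(le.ServerErrorMessage ?? "", @"data\s+([0-9a-fA-F]+)");
                if (m.Success)
                    Console.WriteLine("AD data code: " + m.Groups[1].Value);  // e.g. 52e = bad password
            }
        }
    }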

  • How to have an Excel add-in read rows from a worksheet until there is no more data?

    - by user169867
    I've started writing a COM add-in for Excel 2003 using C#. I'm looking for a code example showing how to read cell data from the active worksheet. I've seen that you can write code like this to grab a range of cells:

    Excel.Range firstCell = ws.get_Range("A1", Type.Missing);
    Excel.Range lastCell = ws.get_Range("A10", Type.Missing);
    Excel.Range worksheetCells = ws.get_Range(firstCell, lastCell);

    What I could use help with is how to read the cell data when you don't know how many rows of data there are. I may be able to determine the starting row that the data will begin at, but there will be an unknown number of rows of data to read. Could someone provide me with an example of how to read rows from the worksheet until you come across a row of empty cells? Also, does anyone know how to grab the range of cells the user has selected? Any help would be greatly appreciated. This seems like a powerful dev tool, but I'm having trouble finding detailed documentation to help me learn it :)
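
    A minimal sketch of the usual approach: walk down a key column from the known starting row until the first empty cell, treating that as the end of the data. The choice of column A as the key column and of five columns per row are assumptions for illustration; Application.Selection covers the second question.

    using Excel = Microsoft.Office.Interop.Excel;

    class WorksheetReader
    {
        // Read rows starting at startRow until column A is empty.
        public static void ReadRows(Excel.Worksheet ws, int startRow)
        {
            int row = startRow;
            while (((Excel.Range)ws.Cells[row, 1]).Value2 != null)
            {
                for (int col = 1; col <= 5; col++)   // assumed width of the data
                {
                    object value = ((Excel.Range)ws.Cells[row, col]).Value2;
                    System.Diagnostics.Debug.WriteLine(value);
                }
                row++;
            }
        }

        // The range the user currently has selected.
        public static Excel.Range CurrentSelection(Excel.Application app)
        {
            return app.Selection as Excel.Range;
        }
    }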

  • SQL Server Compact Edition 3.5: does the data persist in the database file?

    - by jerbersoft
    Hi guys, I have a question about a SQL Server Compact Edition database for a desktop application. I am completely new to SQL Server Compact Edition. The question is: does data inserted into the database persist even if the application is shut down or restarted? I can't seem to find my data when using SQL Server Management Studio to manage the database. Am I missing something? EDIT: Is SQL Server Compact Edition used for caching local data only? Can't we use it the way we normally use SQL Server Express, for example managing data with SQL Server Management Studio?
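
    For what it's worth, rows written through SqlCeConnection do live in the .sdf file itself and survive application restarts; one common gotcha is a project-referenced .sdf being copied to the output directory on every build, which silently overwrites the data. A small sketch with a hypothetical file path and table:

    using System.Data.SqlServerCe;

    class SdfPersistence
    {
        static void Main()
        {
            // Hypothetical database file; open the same path after a restart
            // and the inserted row is still there.
            var connStr = @"Data Source=C:\data\customers.sdf";

            using (var conn = new SqlCeConnection(connStr))
            {
                conn.Open();
                using (var cmd = new SqlCeCommand(
                    "INSERT INTO Customers (Name) VALUES ('Test customer')", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }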

  • Import a text file into a temporary table using LOAD DATA INFILE in a stored procedure (MySQL)

    - by Pankaj
    I need to import a text file into a temporary table and, from that, select portions of it to insert into different tables. I wanted to use LOAD DATA INFILE. Is there any way I can use LOAD DATA INFILE in a stored procedure? I am using MySQL.

    LOAD DATA LOCAL INFILE 'C:\\MyData.txt'
    INTO TABLE tempprod
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\r\n';

    SELECT * FROM product p;
