Search Results

Search found 58653 results on 2347 pages for 'seed data'.


  • Database doesn't update using TransactionScope

    - by Dissonant
    I have a client trying to communicate with a WCF service in a transactional manner. The client passes some data to the service and the service adds the data to its database accordingly. For some reason, the new data the service submits to its database isn't being persisted. When I have a look at the table data in the Server Explorer, no new rows are added... Relevant code snippets are below:

    Client:

    ```csharp
    static void Main()
    {
        MyServiceClient client = new MyServiceClient();

        Console.WriteLine("Please enter your name:");
        string name = Console.ReadLine();
        Console.WriteLine("Please enter the amount:");
        int amount = int.Parse(Console.ReadLine());

        using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
        {
            client.SubmitData(amount, name);
            transaction.Complete();
        }

        client.Close();
    }
    ```

    Service (note: I'm using Entity Framework to persist objects to the database):

    ```csharp
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitData(int amount, string name)
    {
        DatabaseEntities db = new DatabaseEntities();

        Payment payment = new Payment();
        payment.Amount = amount;
        payment.Name = name;

        db.AddToPayment(payment); // add to Payment table
        db.SaveChanges();
        db.Dispose();
    }
    ```

    I'm guessing it has something to do with the TransactionScope being used in the client. I've tried all combinations of db.SaveChanges() and db.AcceptAllChanges() as well, but the new payment data just doesn't get added to the database!
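    For context, WCF only flows a client's ambient TransactionScope into a service when both sides opt in: the operation must be marked with [TransactionFlow] and the binding must enable transaction flow (e.g. transactionFlow="true" on wsHttpBinding). Without that, the client-side scope shown above never reaches SubmitData. A minimal sketch of the contract-side opt-in, with the contract name IMyService assumed for illustration:

    ```csharp
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        // Allows (or, with Mandatory, requires) the caller's ambient
        // transaction to flow into this operation.
        [OperationContract]
        [TransactionFlow(TransactionFlowOption.Allowed)]
        void SubmitData(int amount, string name);
    }
    ```

    With the matching binding configuration in place, the service's TransactionScopeRequired = true enlists in the flowed transaction instead of creating a local one.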

    Read the article

  • Getting Hprof dump for other processes from application code

    - by Natarajan
    Hi. In my application, I have an option to capture an hprof dump. I used android.os.Debug.dumpHprofData(String fileName). Initially I thought the hprof data generated by the method above was for the entire device, which is not so: the hprof data generated is only for my process. Now I am trying to generate hprof data for other processes as well; I need to get the hprof dump for all the running processes from application code. From the adb shell I tried "kill -10 <pid>", which generates the hprof file for the corresponding process in the /data/misc folder. The problem is that the command works perfectly from the adb shell prompt, but I am not able to embed it in my code. My code is like:

    ```java
    Runtime.getRuntime().exec("chmod 777 /data/misc");
    Runtime.getRuntime().exec("kill -10 <pid>");
    ```

    No exceptions are thrown, but somehow it is not working. The same code above captures an hprof dump for my own process when I give it my process ID. I tried android.os.Process.sendSignal(pid, android.os.Process.SIGNAL_USR1) as well and got the same problem: it captures an hprof dump for my process, but for other processes it does not work. Do we need any special permission to signal another process from our own process? Or is it a build issue? Can you please suggest a possible way to get an hprof dump for other processes from application code? Thanks

    Read the article

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory. The question is: are there any good best practices for securing the key in memory? A few ideas:

    - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
    - Splitting the key over multiple memory locations.
    - Encrypting the key. With what, and how do you keep the... key key... secure?
    - Loading the key from file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too).

    Some scenarios in which the key might leak: the evildoer getting hold of a memory dump or core dump; bad bounds checking in code leading to information leakage. The first idea seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
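    The question is posed for a native Linux stack, but the same hygiene question comes up in managed runtimes, where the garbage collector can silently copy buffers around the heap. A small illustrative sketch in C# (an aside, not part of the original post): pin the key so the GC cannot relocate and thereby duplicate it, and zero it the moment it is no longer needed.

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Illustrative only: holds a symmetric key pinned in place so the GC
    // cannot copy it during heap compaction, and wipes it on dispose.
    sealed class PinnedKey : IDisposable
    {
        private readonly byte[] _key;
        private GCHandle _handle;

        public PinnedKey(byte[] key)
        {
            _key = key;
            _handle = GCHandle.Alloc(_key, GCHandleType.Pinned);
        }

        public byte[] Key
        {
            get { return _key; }
        }

        public void Dispose()
        {
            Array.Clear(_key, 0, _key.Length); // wipe before unpinning
            _handle.Free();
        }
    }
    ```

    This only addresses the "stray copies" scenario; it does nothing against an attacker who can already read the process's live memory, which is the harder problem the post is really about.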

    Read the article

  • Split comma separated string to count duplicates

    - by josepv
    I have the following data in my database (comma-separated strings): "word, test, hello", "test, lorem, word", "test", ... etc. How can I transform this data into a Dictionary in which each string is split into its distinct words, together with the number of times each word occurs, i.e. {"test", 3}, {"word", 2}, {"hello", 1}, {"lorem", 1}? I will have approximately 3000 rows of data, in case this makes a difference to any solution offered. Also, I am using .NET 3.5 (and would be interested to see any solution using LINQ).
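    A minimal LINQ sketch under the stated assumptions (the rows variable standing in for the ~3000 database strings is hypothetical; everything used here is available in .NET 3.5):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class WordCounter
    {
        static void Main()
        {
            // Stand-in for the comma-separated rows read from the database.
            IEnumerable<string> rows = new[]
            {
                "word, test, hello",
                "test, lorem, word",
                "test"
            };

            Dictionary<string, int> counts = rows
                .SelectMany(row => row.Split(','))   // flatten rows into words
                .Select(word => word.Trim())         // strip the spaces after commas
                .Where(word => word.Length > 0)
                .GroupBy(word => word)
                .ToDictionary(g => g.Key, g => g.Count());

            foreach (KeyValuePair<string, int> pair in counts)
                Console.WriteLine("{{\"{0}\", {1}}}", pair.Key, pair.Value);
        }
    }
    ```

    At 3000 rows this is well inside the range where a single in-memory pass is fine; no pre-aggregation in SQL is needed.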

    Read the article

  • What is the fastest method to create a new database from a template?

    - by Locksfree
    We are creating databases on demand, and the databases can be created from different templates. All templates have the same structure but different data; the data contained in the templates is small. What is the fastest way to create a copy of the database?

    - Backup/restore using T-SQL?
    - Using SMO?
    - Creating a new database from a scripted version of the template and then filling in the little data required?
    - Other?
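    For comparison, backup/restore tends to be the cheap option here because it copies database pages wholesale instead of replaying schema and data scripts. A hedged sketch of driving it from C# with plain ADO.NET (the database names, file paths, and logical file names are all assumptions; RESTORE needs WITH MOVE so the new copy's files don't collide with the template's):

    ```csharp
    using System.Data.SqlClient;

    class TemplateCloner
    {
        static void Main()
        {
            const string connStr = "Server=.;Database=master;Integrated Security=true";

            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();

                // One-time step: back the template up to a file.
                Run(conn, @"BACKUP DATABASE TemplateDb TO DISK = 'C:\Backups\TemplateDb.bak'");

                // Per on-demand database: restore under a new name, moving the
                // physical files to fresh locations.
                Run(conn, @"RESTORE DATABASE NewTenantDb
                            FROM DISK = 'C:\Backups\TemplateDb.bak'
                            WITH MOVE 'TemplateDb'     TO 'C:\Data\NewTenantDb.mdf',
                                 MOVE 'TemplateDb_log' TO 'C:\Data\NewTenantDb_log.ldf'");
            }
        }

        static void Run(SqlConnection conn, string sql)
        {
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                cmd.CommandTimeout = 600; // a restore can outlive the 30-second default
                cmd.ExecuteNonQuery();
            }
        }
    }
    ```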

    Read the article

  • How can I create an fscanf format string to accept white space and comma (,) tokenization

    - by Jamie
    I've got some analysis code (myprog) that sucks in data using the following:

    ```c
    if (5 == fscanf(in, "%s%lf%f%f%f", tag, &sec, &tgt, &s1, &s2))
    ```

    which works just fine. But in the situation where I've got data files that are separated by commas, I'm currently doing something like: sed 's/,/ /g' data | myprog. Can I modify the format string in the fscanf() call so that it accepts both delimiter formats?

    Read the article

  • What's faster, more efficient: loading a JS file with arrays or populating arrays from tables?

    - by Leigh
    I am rebuilding an ecom site where the product data is stored in a multidimensional JS array that gets loaded on page load. This data is constantly being accessed with JS due to the nature of the site, to update prices based on user selections; there are many options that affect the final price. From a programming standpoint, a DB table is much easier to maintain and update than JS arrays are, and since I am porting the site over to PHP and MySQL, I have been considering moving these arrays into tables. So, would it be better to populate an array from the DB on load so that the pricing data is always available to the JS, or to stay with hard-coded JS files? I considered getting data via AJAX as needed, but since this site has to constantly update pricing with user interaction, I have pretty much ruled that out. How would you handle it?

    Read the article

  • Reading a Delphi binary file in Python

    - by Brendan
    I have a file that was written with the following Delphi declaration:

    ```pascal
    Type
      Tfulldata = Record
        dpoints, dloops : integer;
        dtime, bT, sT, hI, LI : real;
        tm : real;
        data : array[1..armax] Of Real;
      End;
    ...
    Var
      fh : File Of Tfulldata;
    ```

    I want to analyse the data in the files (many MB in size) using Python if possible. Is there an easy way to read in the data and cast it into Python objects similar in form to the Delphi records? Does anyone know of a library that does this?

    Read the article

  • Submit form using javascript, works in FF but not in IE

    - by Permana
    I have this code. It works in Firefox, but not in IE:

    ```html
    <body>
    <?php $data = getLoginData($_SESSION['whoyouare']); ?>
    <form name="frm_redirect_dfr" action="<?php echo $data['url']; ?>" method="POST"
          id="frm_redirect_dfr" style="display: none;">
        <input name="DFRNet_User" value="<?php echo $data['username']; ?>" type="hidden" />
        <input name="DFRNet_Pass" value="<?php echo $data['password']; ?>" type="hidden" />
        <input name="tbllogin" value="login" type="hidden" />
        <input type="submit" value="submit" />
    </form>
    <script language="javascript" type="text/javascript">
        document.forms["frm_redirect_dfr"].submit();
    </script>
    </body>
    ```

    What I want to do is: when the user accesses the page, it first gets the login data, echoes it into the form, and submits the form automatically using JavaScript.

    Read the article

  • How to save multiple columns in a database

    - by shamim
    I have a text file. I need to get the data from this text file and show it in a grid. 1) After this, the user can update the information in the GridView. 2) Clicking the save button saves the data to the database. Before the save button is clicked, the data should not be saved to the database. How do I do that?
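    One common shape for this in ASP.NET WebForms, sketched under assumptions (the control IDs, file path, connection string, and table name below are all invented for illustration): parse the file into a DataTable, park it in Session so nothing touches the database across postbacks, and only write it out in the save button's click handler.

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class ImportPage : Page
    {
        protected GridView ImportGrid; // normally wired up by the .aspx markup
        protected Button SaveButton;

        // The pending rows live in Session, not in the database.
        private DataTable Pending
        {
            get { return (DataTable)Session["pending"]; }
            set { Session["pending"] = value; }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack) return;

            var table = new DataTable("Import");
            table.Columns.Add("Name");
            table.Columns.Add("Amount");

            foreach (string line in File.ReadAllLines(@"C:\data\input.txt"))
            {
                string[] parts = line.Split(',');
                table.Rows.Add(parts[0].Trim(), parts[1].Trim());
            }

            Pending = table;
            ImportGrid.DataSource = table;
            ImportGrid.DataBind();
        }

        protected void SaveButton_Click(object sender, EventArgs e)
        {
            // Only now does anything reach the database.
            using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
            {
                conn.Open();
                using (var bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "dbo.Import";
                    bulk.WriteToServer(Pending);
                }
            }
        }
    }
    ```

    Wiring the GridView's edit events (RowUpdating and friends) back into the session table is omitted here, but that is where the "user can update" step would plug in.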

    Read the article

  • How do I convert an AMD module from a singleton to an instance?

    - by Jamie Ide
    I'm trying to convert a working Durandal view model module from a singleton to an instance. The original working version followed this pattern:

    ```javascript
    define(['knockout'], function(ko) {
        var vm = {
            activate: activate,
            companyId: null,
            company: ko.observable({})
        };
        return vm;

        function activate(companyId) {
            vm.companyId = companyId;
            //get company data then vm.company(data);
        }
    });
    ```

    The new version exports a function so that I get a new instance on every request:

    ```javascript
    define(['knockout'], function(ko) {
        var vm = function() {
            activate = activate;
            companyId = null;
            company = ko.observable({});
        };
        return vm;

        function activate(companyId) {
            vm.companyId = companyId;
            //get company data then vm.company(data);
        }
    });
    ```

    The error I'm getting is "object function () [...function signature...] has no method company" on the line vm.company(data);. What am I doing wrong? Why can I set the property but not access the knockout observable? How should I refactor the original code so that I get a new instance on every request?

    Update: my efforts to simplify the code for this question hid the actual problem. My real code was using Q promises and calling two methods with Q.all. Since Q is in the global namespace, it couldn't resolve my view model after converting to a function. Passing the view model to the methods called by Q resolved the problem.

    Read the article

  • Replacing a colour/colours in a movieclip with different colours?

    - by Oli
    I am trying to take a movieclip of a character and change the colour of their clothes. The character is comprised of vectors. So far I have semi-successfully used this method:

    - stop the movieclip
    - take the bitmap data from the current frame
    - use threshold to replace the colour
    - store the resulting bitmap data in an array
    - add an onEnterFrame function: clear the current frame and add the bitmap data from the processed data in the array

    This works pretty well: each frame is only processed once at the beginning, and then the write to the movieclip is very quick. However! As the replacement is being performed on a bitmap, an amount of anti-aliasing takes place to remove jaggies/pixelation. This produces colours that are not matched by threshold, so the main colour is replaced correctly but it is surrounded by a halo of mixed colours :( I am sure there should be a better way to do this. Any ideas or answers would be greatly appreciated. Thanks.

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

    ```
    >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
    23:56:45time<1ms
    23:56:45
    23:56:46time<1ms
    23:56:46
    23:56:47time<1ms
    23:56:47
    23:56:47
    ```

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

    ```
    Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
    Reply from ::1: time<1ms
    Reply from ::1: time<1ms
    Reply from ::1: time<1ms
    Reply from ::1: time<1ms

    Ping statistics for ::1:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 0ms, Maximum = 0ms, Average = 0ms
    ```

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

    ```csharp
    using System;

    namespace ConOutTime
    {
        class Program
        {
            static void Main(string[] args)
            {
                string s;
                while ((s = Console.ReadLine()) != null)
                {
                    if (s.Length > 0) // don't write time for empty lines
                        Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                }
            }
        }
    }
    ```

    Output:

    ```
    00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
    00:44:30 Reply from ::1: time<1ms
    00:44:31 Reply from ::1: time<1ms
    00:44:32 Reply from ::1: time<1ms
    00:44:33 Reply from ::1: time<1ms
    00:44:33 Ping statistics for ::1:
    00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    00:44:33 Approximate round trip times in milli-seconds:
    00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms
    ```

    So, when the same API is called from PowerShell instead of C#, many parts of stdin get 'eaten'. Is the PowerShell host reading stdin itself, even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • Custom field type: problem in page layout

    - by AB
    I have created a custom field type and it's working properly, but when I add it to a page layout it doesn't display any data. I am getting the data from one of the lists; I have added that field to a list, and there it displays all the data I want to select, but on the page layout it's blank. Any idea where I'm going wrong?

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed among the available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • Integrating 3rd-party forum software to member-based website

    - by john
    When using some existing forum software in a larger web site, how easy is it to:

    1) Make your site's login functionality log the user into the forum
    2) Make your site's registration functionality create forum login data

    I suppose in a way it might be easier to ONLY use the forum's database for maintaining users, but that means trusting it with sensitive data. I'm planning an integration between an existing bespoke desktop app and a new bespoke web site which should include forums. I don't know which forum software will be used, but I know the new web functionality won't be PHP-based. I figure that's not a big deal, but I'm wondering whether forums typically allow configuration of where they look for login data, to avoid duplicating this data in both my DB and the forum DB.

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:

    ```python
    import cPickle as pickle

    f = open("bigNetworkXGraph.pickle", "rb")
    binary_data = f.read()              # This part doesn't take long
    graph = pickle.loads(binary_data)   # This takes ages
    ```

    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody can tell me how to increase some read buffer buried in the pickle implementation.

    Read the article

  • C# method generic params parameter bug?

    - by Mike M
    Hey, it appears to me as though there is a bug/inconsistency in the C# compiler. This works fine (the first method gets called):

    ```csharp
    public void SomeMethod(string message, object data);
    public void SomeMethod(string message, params object[] data);

    // ....
    SomeMethod("woohoo", item);
    ```

    Yet this causes a "The call is ambiguous between the following methods" error:

    ```csharp
    public void SomeMethod(string message, T data);
    public void SomeMethod(string message, params T[] data);

    // ....
    SomeMethod("woohoo", (T)item);
    ```

    I could just dump the first method entirely, but since this is a very performance-sensitive library and the first method will be used about 75% of the time, I would rather not always wrap things in an array and instantiate an iterator to go over a foreach if there is only one item. Splitting these into differently named methods would be messy at best, IMO. Thoughts?
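    Whether this pair really is ambiguous depends on how T is introduced (on the method versus the containing class) and on the argument's static type. For reference, a self-contained sketch with method-level type parameters (the Log name is invented for this illustration), where both overloads coexist and each call resolves unambiguously:

    ```csharp
    using System;

    class Demo
    {
        // Fast path: no array allocation for the common single-item call.
        static void Log<T>(string message, T data)
        {
            Console.WriteLine("single: {0} = {1}", message, data);
        }

        // Bulk path: several items at once.
        static void Log<T>(string message, params T[] data)
        {
            Console.WriteLine("many: {0} ({1} items)", message, data.Length);
        }

        static void Main()
        {
            Log("price", 42);                 // non-expanded form wins: single-item overload
            Log("prices", new[] { 1, 2, 3 }); // explicit array: params overload
            Log("prices", 1, 2, 3);           // expanded form: params overload
        }
    }
    ```

    If a call in a generic class still comes up ambiguous, passing an explicit array (or casting the argument to T[]) is the usual way to pin down the params overload at the call site.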

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up from this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got was this message when trying to connect:

    ```
    ORA-01033: ORACLE initialization or shutdown in progress
    ```

    An attempt to alter database open ended up in:

    ```
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    ```

    I tried to shut down / restart the database, but got this message:

    ```
    Total System Global Area  566231040 bytes
    Fixed Size                  1220604 bytes
    Variable Size             117440516 bytes
    Database Buffers          444596224 bytes
    Redo Buffers                2973696 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    ```

    When everything continued the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespace using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got:

    ```
    Database not open
    ```

    As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure whether it would solve all my problems: I wasn't able to find anything that would tell me whether it migrates pre-existing data, or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken

    Read the article

  • jquery ui dialog in asp.net mvc3 doesn't open the second time

    - by giri
    When I click the New Trade button in the form, it opens a jQuery UI dialog. I also have a link button in the gridview which should open a jQuery UI dialog, and it does open one before the New Trade button has been clicked. But after clicking the New Trade button, when I click the link button in the gridview (which invokes the ViewTradeDialog(id) function), the dialog doesn't open and I get the error message "$vwdia.html(data).dialog is not a function". My code follows:

    ```cshtml
    @using (Html.BeginForm("NewTrade", "Trade", FormMethod.Post, new { id = "searchForm" }))
    {
        <div id="searchbtn">
            <input id="btn_newtrade" type="submit" value="New Trade" />
        </div>
    }
    ```

    The jQuery code:

    ```javascript
    <script type="text/javascript">
    $(function () {
        var $loading = $('<img src="../../loading.gif" alt="loading">');
        var $dialog = $('<div></div>').append($loading);
        $('#searchForm').submit(function (e) {
            var url = this.action;
            $.ajax({
                autoOpen: false,
                url: url,
                success: function (data) {
                    $dialog.html(data).dialog({
                        zIndex: 1,
                        width: 1400,
                        height: 600,
                        resizable: false,
                        title: 'New Trade Details',
                        modal: true,
                        buttons: {
                            "close": function () {
                                $dialog.dialog('close');
                            },
                            "Add Trade": function () {
                                $dialog.dialog('close');
                                $.ajax({ type: 'POST', url: url });
                            }
                        }
                    });
                }
            });
            return false;
        });
    });

    function ViewTradeDialog(id) {
        alert(id);
        var $vwdia = $('<div></div>');
        var url = '/Trade/ViewTrades?tradeid=' + id;
        $.ajax({
            url: url,
            success: function (data) {
                $vwdia.html(data).dialog({
                    width: 600,
                    height: 600,
                    resizable: false,
                    title: 'View Trade Details',
                    modal: false,
                    buttons: {
                        "close": function () {
                            $vwdia.dialog('close');
                        }
                    }
                });
            }
        });
        return false;
    }
    </script>
    ```

    Read the article

  • CSRF protection and cross site form access

    - by fl00r
    Hi. I am working on cross-site authentication (several domains share a common authentication). So I want to send authentication data (login, password) to the main domain from the others. How should I use protect_from_forgery, and how can I check that the data was received from a valid domain? What I am thinking now is to turn off protect_from_forgery for the sessions controller and check the domain name of the received data. But maybe CSRF protection can be configured for more than one domain?

    Read the article
