Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Not getting key value from Identity column back after inserting new row with SubSonic ActiveRecord

    - by mikedevenney
    I'm sure I'm missing the obvious answer here, but could use a hand. I'm new to SubSonic and using version 3. I've got myself to the point of being able to query and insert, but I'm stuck on how to get the value of the identity column back after my insert. I saw another post that mentioned Linq Templates. I'm not using those (at least I don't think I am...?) TIA ... UPDATE ... So I've been debugging through my code, watching how the SubSonic code works, and I found where the identity column is being ignored. I use int as the datatype for my ID columns in the database and set them as identity. Since int is a non-nullable data type in C#, the logical test in the Add method (public void Add(IDataProvider provider)) that checks whether there is a value in the key column by doing a (key == null) comparison could be the issue. The code that gets the new value for the identity field is in the 'true' path; since an int can't be null, and I use ints as my identity column data types, that test will never pass. The ID field for my object has a 0 in it that I didn't put there. I assume it's set during the initialization of the object. Am I off base here? Is the answer to change my data types in the database? Another question (more of a curiosity): I noticed that some of the properties in the generated classes are declared with a ? after the datatype. I'm not familiar with this declaration construct... what gives? There are some declared as int (non-key fields) and others that are declared as int? (key fields). Does this have something to do with how they're treated at initialization? Any help is appreciated!
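
    For illustration, the ? suffix asked about above is C#'s nullable value type syntax (int? is shorthand for Nullable<int>), which can distinguish "no key assigned yet" from a real value; a minimal, SubSonic-independent sketch (class name is illustrative):

        using System;

        class NullableKeyDemo
        {
            static void Main()
            {
                int? id = null;                  // int? is shorthand for Nullable<int>
                Console.WriteLine(id.HasValue);  // False - no identity value assigned yet

                id = 42;                         // e.g. the value handed back after an insert
                Console.WriteLine(id.Value);     // 42

                int plainId = 0;                 // a plain int can never be null, so a
                Console.WriteLine(plainId);      // (key == null) style test on it never passes
            }
        }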

    Read the article

  • is there something equivalent to 'Address of' or offset operator in .net?

    - by Gio
    We have nested structures as such, used as an interface for some device drivers. On occasion we have to update individual elements. An 'address of' operator would be helpful, but an 'offset' function or operator is what I'm really looking for, and I'm not sure how to go about it. In other words, how far is structureN.elementX away from the start of the structure in bytes? [StructLayout(LayoutKind.Sequential)] public struct S1 { UInt16 elem1; UInt16 elem2; UInt16 elem3; } [StructLayout(LayoutKind.Sequential)] public struct S2 { UInt16 elem1; UInt16 elem2; UInt16 elem3; } [StructLayout(LayoutKind.Sequential)] public struct driver { public S1 s1; public S2 s2; } For instance, we need to send the device driver some data to update driver.s1.elem3, by way of providing an offset address, data block and length. We would update our local copy, then call the device API with the aforementioned data. I'm not sure whether I have to do this with 'unsafe' method calls. Any help?
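
    One managed route worth sketching (no unsafe code) is System.Runtime.InteropServices.Marshal.OffsetOf, which reports a field's byte offset within the marshaled layout of a struct. A simplified, self-contained example; the struct shapes mirror the ones above, with public fields and names chosen for brevity:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct S1 { public UInt16 elem1; public UInt16 elem2; public UInt16 elem3; }

        [StructLayout(LayoutKind.Sequential)]
        public struct Driver { public S1 s1; public S1 s2; }

        class OffsetDemo
        {
            static void Main()
            {
                // Offset of the s1 field within Driver, plus the offset of elem3 inside S1,
                // gives how far Driver.s1.elem3 sits from the start of the whole structure.
                long offset = Marshal.OffsetOf(typeof(Driver), "s1").ToInt64()
                            + Marshal.OffsetOf(typeof(S1), "elem3").ToInt64();

                Console.WriteLine("Driver.s1.elem3 offset: {0} bytes", offset);   // expect 4
            }
        }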

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.

    Read the article

  • ResXResourceWriter Chops off end of file

    - by Aaron Salazar
    I'm trying to create a .resx file from a dictionary of strings. Luckily the .Net framework has a ResXResourceWriter class. The only problem is that when I create the file using the code below, the last few lines of my generated resx file are missing. I checked my Dictionary and it contains all the string pairs that I expect. public void ExportToResx(Dictionary<string,string> resourceDictionary, string fileName) { var writer = new ResXResourceWriter(fileName); foreach (var resource in resourceDictionary) { writer.AddResource(resource.Key, resource.Value); } } Unfortunately, it is a little difficult to show the entire resx file since it has 2198 (should have 2222) lines but here is the last little bit: ... 2195 <data name="LangAlign_ReportIssue" xml:space="preserve"> 2196 <value>Report an Issue</value> 2197 </data> 2198 <data name="LangAlign_Return BTW, notice that the file cuts off right at the end of that "n" in "LangAlign_Return". The string should read "LangAlign_ReturnToWorkspace". The file should also end at line 2222.
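
    One likely explanation, offered as a guess rather than a confirmed diagnosis: ResXResourceWriter only writes its buffered entries and the closing XML when Generate() is called (Close()/Dispose() call it for you), and the method above never does either, which would truncate the file exactly as described. A sketch of the same method with disposal added:

        // Requires System.Resources and System.Collections.Generic
        public void ExportToResx(Dictionary<string, string> resourceDictionary, string fileName)
        {
            using (var writer = new ResXResourceWriter(fileName))
            {
                foreach (var resource in resourceDictionary)
                {
                    writer.AddResource(resource.Key, resource.Value);
                }
                writer.Generate();   // flush the buffered entries and write the closing XML
            }
        }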

    Read the article

  • convert SQL Server stored procedure to MySQL

    - by karthik
    I need to convert the following SQL Server stored procedure to MySQL. I am new to MySQL; help needed. CREATE PROC InsertGenerator (@tableName varchar(100)) as --Declare a cursor to retrieve column specific information --for the specified table DECLARE cursCol CURSOR FAST_FORWARD FOR SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName OPEN cursCol DECLARE @string nvarchar(3000) --for storing the first half --of INSERT statement DECLARE @stringData nvarchar(3000) --for storing the data --(VALUES) related statement DECLARE @dataType nvarchar(1000) --data types returned --for respective columns SET @string='INSERT '+@tableName+'(' SET @stringData='' DECLARE @colName nvarchar(50) FETCH NEXT FROM cursCol INTO @colName,@dataType IF @@fetch_status<>0 begin print 'Table '+@tableName+' not found, processing skipped.' close curscol deallocate curscol return END WHILE @@FETCH_STATUS=0 BEGIN IF @dataType in ('varchar','char','nchar','nvarchar') BEGIN SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+' END ELSE if @dataType in ('text','ntext') --if the datatype --is text or something else BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+' END ELSE IF @dataType = 'money' --because money doesn't get converted --from varchar implicitly BEGIN SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+' END ELSE IF @dataType='datetime' BEGIN SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+' END ELSE IF @dataType='image' BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+' END ELSE --presuming the data type is int,bit,numeric,decimal BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+' END SET @string=@string+@colName+',' FETCH NEXT FROM cursCol INTO @colName,@dataType END

    Read the article

  • C++ Templates: Convincing self against code bloat

    - by ArunSaha
    I have heard about code bloat in the context of C++ templates. I know that is not the case with modern C++ compilers. But I want to construct an example and convince myself. Let's say we have a class template< typename T, size_t N > class Array { public: T * data(); private: T elems_[ N ]; }; template< typename T, size_t N > T * Array<T, N>::data() { return elems_; } Further, let's say types.h contains typedef Array< int, 100 > MyArray; x.cpp contains MyArray ArrayX; and y.cpp contains MyArray ArrayY; Now, how can I verify that the code space for MyArray::data() is the same for both ArrayX and ArrayY? What else should I know and verify from this (or other similarly simple) examples? If there are any g++-specific tips, I am interested in those too. PS: Regarding bloat, I am concerned about even the slightest bloat, since I come from an embedded context.

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across the network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web resource or local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task would be limited to computing, without being able for example to use files stored locally (but why not a database?), or even to use local variables, because it would be two distinct applications rather than two threads of the same application, The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • JQuery DataTables link item

    - by rogcg
    I'm trying to link the items from a specific column, but each one will be linked to a different id from the JSON string. Unfortunately I can't find a way to do this using the API (I know there are a lot of ways to do it without using the API), but I'm looking for a way to link an item from a column (each one with a link to a specific id). So here is my code; I use getJSON to get the JSON from the server, and I load the data from this JSON into the table like this: $.getJSON("/method/from/server/", function(data) { var total = 0; $("#table_body").empty(); var oTable = $('#mytable').dataTable( { "sPaginationType" : "full_numbers", "aaSorting": [[ 0, "asc" ]] }); oTable.fnClearTable(); $.each(data, function(i, item) { oTable.fnAddData( [ item.contact_name, item.contact_email ] ); }); }); What I want to do is, for each row, link the contact_name to its id, which is also in the JSON and can be accessed inside this $.each loop by using item.contact_id. Is there a way to do this using the DataTables API? If yes, could you explain it to me and provide a good resource that will help me with this? Thanks.

    Read the article

  • PHP Pass Dynamic Array name to function

    - by Brad
    How do I pass an array key to a function to pull up the right key's data? // The array <?php $var['TEST1'] = Array ( 'Description' => 'This is a Description', 'Version' => '1.11', 'fields' => Array( 'ID' => array( 'type' => 'int', 'length' =>'11', 'misc' =>'auto_increment' ), 'DATA' => array( 'type' => 'varchar', ' length' => '255' ) ); $var['TEST2'] = Array ( 'Description' => 'This is the 2nd Description', 'Version' => '2.1', 'fields' => Array( 'ID' => array( 'type' => 'int', 'length' =>'11', 'misc' =>'auto_increment' ), 'DATA' => array( 'type' => 'varchar', ' length' => '255' ) ) // The function <?php $obj = 'TEST1'; print_r($schema[$obj]); // <-- Gives me output. But calling the function doesn't. echo buildStructure($obj); /** * @TODO to add auto_inc support */ function buildStructure($obj) { $output = ''; $primaryKey = $schema["{$obj}"]['primary key']; foreach ($schema["{$obj}"]['fields'] as $name => $tag) // #### ERROR #### Invalid argument supplied for foreach() { $type = $tag['type']; $length = $tag['length']; $default = $tag['default']; $description = $tag['description']; $length = (isset($length)) ? "({$length})" : ''; $default = ($default == NULL ) ? "NULL" : $default; $output .= "`{$name}` {$type}{$length} DEFAULT {$default} COMMENT `{$DESCRIPTION}`, "; } return $output; }

    Read the article

  • C#: Need one of my classes to trigger an event in another class to update a text box

    - by Matt
    Total n00b to C# and events although I have been programming for a while. I have a class containing a text box. This class creates an instance of a communication manager class that is receiving frames from the Serial Port. I have this all working fine. Every time a frame is received and its data extracted, I want a method to run in my class with the text box in order to append this frame data to the text box. So, without posting all of my code I have my form class... public partial class Form1 : Form { CommManager comm; public Form1() { InitializeComponent(); comm = new CommManager(); } private void updateTextBox() { //get new values and update textbox } . . . and I have my CommManager class class CommManager { //here we manage the comms, recieve the data and parse the frame } SO... essentially, when I parse that frame, I need the updateTextBox method from the form class to run. I'm guessing this is possible with events but I can't seem to get it to work. I tried adding an event handler in the form class after creating the instance of CommManager as below... comm = new CommManager(); comm.framePopulated += new EventHandler(updateTextBox); ...but I must be doing this wrong as the compiler doesn't like it... Any ideas?!
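
    A minimal sketch of the usual pattern (the event name and helper method are illustrative, not from the original code): CommManager declares and raises an event, and the form subscribes with a handler whose signature matches EventHandler. The compiler complaint described above is typically either a missing event declaration in CommManager or a handler whose signature doesn't match EventHandler.

        class CommManager
        {
            // The event the form subscribes to
            public event EventHandler FramePopulated;

            // Call this after a frame has been received and parsed
            private void OnFramePopulated()
            {
                EventHandler handler = FramePopulated;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        }

        // In Form1's constructor:
        //     comm = new CommManager();
        //     comm.FramePopulated += updateTextBox;
        //
        // and the handler must match the EventHandler signature:
        //     private void updateTextBox(object sender, EventArgs e)
        //     {
        //         // get new values and update the text box
        //     }

    One extra wrinkle: if the frames arrive on the serial port's DataReceived thread, the handler will also need Invoke/BeginInvoke before touching the text box.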

    Read the article

  • C Typecast: How to

    - by Jean
    #include<stdio.h> int main(void) { unsigned short a,e,f ; // 2 bytes data type unsigned int temp1,temp2,temp4; // 4 bytes data type unsigned long temp3; // 8 bytes data type a=0xFFFF; e=((a*a)+(a*a))/(2*a); // Line 8 //e=(((unsigned long)(a*a)+(unsigned long)(a*a)))/(unsigned int)(2*a); temp1=a*a; temp2=a*a; temp3=(unsigned long)temp1+(unsigned long)temp2; // Line 14 temp4=2*a; f=temp3/temp4; printf("%u,%u,%lu,%u,%u,%u,%u\n",temp1,temp2,temp3,temp4,e,f,a); return(1); } How do I fix the arithmetic (At Line 8 by appropriate typecasting of intermediate results) so that overflows are taken care of ? Currently it prints 65534 instead of expected 65535. Why is the typecast necessary for Line 14 ?

    Read the article

  • Problem validating an XSD file: The content type of a derived type and that of its base must both be mixed or both be element-only

    - by Paulo Tavares
    Hi, I have the following XML schema: <?xml version="1.0" encoding="UTF-8"?> <schema xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0" targetNamespace="urn:ietf:params:xml:ns:netconf:base:1.0" ... <complexType name="dataInlineType"> <xs:complexContent> <xs:extension base="xs:anyType"/> </xs:complexContent> </complexType> <complexType name="get-config_output_type__" > <complexContent> <extension base="netconf:dataInlineType"> <sequence> <element name="data"> <complexType> <sequence> <element name="__.get-config.output.data.A__" minOccurs="0" maxOccurs="unbounded" /> </sequence> </complexType> </element> <element name="__.get-config.A__" minOccurs="0" maxOccurs="unbounded"/> </sequence> </extension> </complexContent> And I am getting the following error: cos-ct-extends.1.4.3.2.2.1.a: The content type of a derived type and that of its base must both be mixed or both be element-only. Type 'get-config_output_type__' is element only, but its base type is not. If I put mixed="true" on both elements, I get another error: cos-nonambig: WC[##any] and "urn:ietf:params:xml:ns:netconf:base:1.0":data (or elements from their substitution group) violate "Unique Particle Attribution". During validation against this schema, ambiguity would be created for those two particles. I am using Eclipse to validate my schema, so what can I do?

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program and I would like to offer the feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders: "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name" "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence" Notice that in one data field "Name" there is a comma to separate last name and first name and in another there is not. My plan is to have a line for each field (i.e. ID, Name, City, etc.) and an "import to" statement and a list box with options like: Don't Import, BusinessJoin Date, First Name, Zip, and the program recognizes those as properties of an object... I'd also like the user to be able to record preset field orders so they can re-use them for CSV files from the same download source. Then I also need it to check if a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is a record (i.e. the mailing address may have changed, so if that field is filled overwrite it, or this mailing address is invalid, leave the current data untouched for this person, overwrite the rest). While I'm capable of coding this... I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already perform most of this functionality... I hope this makes sense...
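
    For what it's worth, the "user tells the program the field order" plan sketches out fairly directly as a mapping from CSV column index to property name, applied with reflection. A rough illustration with hypothetical names (Contact, CsvMapper, MapRow), assuming the CSV line has already been split by a proper CSV parser that handles quoted fields like "Call, Anson":

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        class Contact
        {
            public string Name { get; set; }
            public string Zip { get; set; }
            public string JoinDate { get; set; }
        }

        static class CsvMapper
        {
            // columnMap is what the user builds in the UI:
            // CSV column index -> Contact property name (columns not listed are not imported)
            public static Contact MapRow(string[] fields, Dictionary<int, string> columnMap)
            {
                var contact = new Contact();
                foreach (var pair in columnMap)
                {
                    PropertyInfo prop = typeof(Contact).GetProperty(pair.Value);
                    if (prop != null && pair.Key < fields.Length)
                        prop.SetValue(contact, fields[pair.Key], null);
                }
                return contact;
            }
        }

    A saved "preset" for a given download source is then just that dictionary, serialized.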

    Read the article

  • How does asp.net MVC remember my false values on postback?

    - by Michel
    Hi, This is working, but how??? I have a controller action for a post: [AcceptVerbs(HttpVerbs.Post )] public ActionResult Edit(Person person) { bool isvalid = ModelState.IsValid; etc. The Person object has a property BirthDate, type DateTime. When I enter some invalid data in the form, say 'blabla', which is obviously not a valid DateTime, it fills all the (other) Person properties with the correct data and the BirthDate property with a new blank DateTime. The bool isvalid has the value 'false'. So far so good. Then I do this: return View(p); and in the view I have this: <%= Html.TextBox("BirthDate", String.Format("{0:g}", Model.BirthDate)) %> <%= Html.ValidationMessage("BirthDate", "*") %> And there it comes: I EXPECTED the model to contain the new, blank DateTime because I didn't put any new data in. Second, when the View displays something, it must be a DateTime, because Model.BirthDate can't hold anything but a DateTime. But to my surprise, it shows a textbox with the 'blabla' value! (and the red * behind it) Which of course is nice because the user can see what he typed wrong, but how can that 'blabla' string be transferred to the View in a DateTime field?
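
    The short explanation, assuming the default model binder: the HtmlHelper inputs (Html.TextBox and friends) look in ModelState first and only fall back to the model's value, and ModelState keeps the raw attempted string alongside any binding errors. That is how 'blabla' survives even though the DateTime property cannot hold it. A small sketch of where the raw text lives in the controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Person person)
        {
            bool isValid = ModelState.IsValid;   // false

            // The raw text the user typed is kept as a string,
            // independent of the DateTime property on the model:
            string typed = ModelState["BirthDate"].Value.AttemptedValue;   // "blabla"

            return View(person);
        }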

    Read the article

  • Generated images fail to load in browser

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes from 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

    Read the article

  • Call Oracle package function using Odbc from C#

    - by Paolo Tedesco
    I have a function defined inside an Oracle package: CREATE OR REPLACE PACKAGE BODY TESTUSER.TESTPKG as FUNCTION testfunc(n IN NUMBER) RETURN NUMBER as begin return n + 1; end testfunc; end testpkg; / How can I call it from C# using Odbc? I tried the following: using System; using System.Data; using System.Data.Odbc; class Program { static void Main(string[] args) { using (OdbcConnection connection = new OdbcConnection("DSN=testdb;UID=testuser;PWD=testpwd")) { connection.Open(); OdbcCommand command = new OdbcCommand("TESTUSER.TESTPKG.testfunc", connection); command.CommandType = System.Data.CommandType.StoredProcedure; command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue; command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input; command.Parameters["n"].Value = 42; command.ExecuteNonQuery(); Console.WriteLine(command.Parameters["ret"].Value); } } } But I get an exception saying "Invalid SQL Statement". What am I doing wrong?
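
    One thing worth trying, offered as a sketch rather than a verified fix: System.Data.Odbc generally expects the ODBC call escape syntax in CommandText, with a leading ? placeholder bound to the function's return value.

        using System;
        using System.Data;
        using System.Data.Odbc;

        class Program
        {
            static void Main(string[] args)
            {
                using (var connection = new OdbcConnection("DSN=testdb;UID=testuser;PWD=testpwd"))
                {
                    connection.Open();

                    // ODBC call escape syntax; the first ? receives the return value
                    var command = new OdbcCommand("{ ? = CALL TESTUSER.TESTPKG.testfunc(?) }", connection);
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue;
                    command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input;
                    command.Parameters["n"].Value = 42;
                    command.ExecuteNonQuery();

                    Console.WriteLine(command.Parameters["ret"].Value);   // expect 43
                }
            }
        }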

    Read the article

  • find consecutive nonzero values

    - by thymeandspace
    I am trying to write a simple MATLAB program that will find the first chain (more than 70) of consecutive nonzero values and return the starting value of that consecutive chain. I am working with movement data from a joystick and there are a few thousand rows of data with a mix of zeros and nonzero values before the actual trial begins (coming from subjects slightly moving the joystick before the trial actually started). I need to get rid of these rows before I can start analyzing the movement from the trials. I am sure this is a relatively simple thing to do so I was hoping someone could offer insight. Thank you in advance -Lilly EDIT: Here's what I tried: s = zeros(size(x1)); for i=2:length(x1) if(x1(i-1) ~= 0) s(i) = 1 + s(i-1); end end display(S); for a vector x1 which has a max chain of 72, but I don't know how to find the max chain and return its first value, so I know where to trim. I also really don't think this is the best strategy, since the max chain in my data will be tens of thousands of values. Thanks for helping me edit, Steve. :)

    Read the article

  • Read from plist instead of Code array

    - by BoSoud
    Hi guys, I am using the UITableView - Searching table view tutorial (http://www.iphonesdkarticles.com/2009/01/uitableview-searching-table-view.html). It's a really nice, easy tutorial, but I'm having a hard time trying to read from a plist. Here is what I changed: - (void)viewDidLoad { [super viewDidLoad]; //Initialize the array. listOfItems = [[NSMutableArray alloc] init]; TableViewAppDelegate *AppDelegate = (TableViewAppDelegate *)[[UIApplication sharedApplication] delegate]; listOfItems = [AppDelegate.data objectForKey:@"Countries"]; //Initialize the copy array. copyListOfItems = [[NSMutableArray alloc] init]; //Set the title self.navigationItem.title = @"Countries"; //Add the search bar self.tableView.tableHeaderView = searchBar; searchBar.autocorrectionType = UITextAutocorrectionTypeNo; searching = NO; letUserSelectRow = YES; } This is how I read the plist in my AppDelegate.m: - (void)applicationDidFinishLaunching:(UIApplication *)application { NSString *Path = [[NSBundle mainBundle] bundlePath]; NSString *DataPath = [Path stringByAppendingPathComponent:@"Data.plist"]; NSDictionary *tempDict = [[NSDictionary alloc] initWithContentsOfFile:DataPath]; self.data = tempDict; [tempDict release]; // Configure and show the window [window addSubview:[navigationController view]]; [window makeKeyAndVisible]; } This is my plist: <plist version="1.0"> <dict> <key>Countries</key> <array> <array> <string>USA</string> </array> <dict/> </array> </dict> </plist> And I get this error: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFArray objectForKey:]: unrecognized selector sent to instance 0x1809dc0' Help please, I am stuck with this.

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.) The problem has the following qualities and resiliences: Problem persists no matter what the target table is. RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use. Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high. CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe) Disk activity primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s) OLE DB destination and SQL Server Destination both exhibit this problem. To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • Cross Thread problem C#

    - by Frederik Witte
    Hello people - I have this code (lg_log is a ListBox, and I want it to log the output of start_server.bat). Here is the code I have: public void bt_play_Click(object sender, EventArgs e) { lg_log.Items.Add("Starting Mineme server .."); string directory = Directory.GetCurrentDirectory(); var info = new ProcessStartInfo(directory + @"\start_base.bat") {UseShellExecute = false, RedirectStandardOutput = true, CreateNoWindow = true, WorkingDirectory = directory + @"\Servers\Base"}; var proc = new Process { StartInfo = info, EnableRaisingEvents = true }; proc.OutputDataReceived += (obj, args) => { if (args.Data != null) { lg_log.Items.Add(args.Data); } }; proc.Start(); proc.BeginOutputReadLine(); lg_log.Items.Add("Server is now running!"); proc.WaitForExit(); } When I run this, I get an error. Can anybody help me? I'll rate the answer up! :D Edit: The error I get is this: System.InvalidOperationException. Hope it helps :) The error comes at the lg_log.Items.Add(args.Data); line.
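
    A likely culprit, assuming this is WinForms: Process.OutputDataReceived fires on a background thread, and touching a control from a non-UI thread raises exactly this kind of InvalidOperationException. A minimal sketch of the event handler from the code above, rewritten to marshal back to the UI thread (requires using System; for Action):

        proc.OutputDataReceived += (obj, args) =>
        {
            if (args.Data != null)
            {
                string line = args.Data;
                // Marshal the call onto the UI thread before touching the ListBox
                lg_log.BeginInvoke((Action)(() => lg_log.Items.Add(line)));
            }
        };

    It is also worth noting that proc.WaitForExit() at the end of the click handler blocks the UI thread until the batch file finishes, which will freeze the form while the server runs.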

    Read the article

  • Load page for validation but do not display it to user in ASP.NET

    - by Kevin
    We have a site requiring users to pay $2 to view the details of a record. We occasionally get complaints because we send them to the payment page, and once they pay it turns out the record isn't valid, or it was lost, or the data couldn't be generated. So we want to add in a check to ensure the page constructs properly before the user is required to pay for it. However, we don't want the user to have access to the page until they pay for it. Is there anything in ASP.NET 3.5 or just general web design that would allow something like this? The data on the record is real time and computed on a backend server before being sent to the client. Occasionally this computation fails for whatever reason. Our alternative is to call all of the loading methods and validate the data, then redirect them to the payment page. The problem is A) this will be a relatively involved process rewriting all of these methods to return validation information, and B) it still doesn't guarantee us the page will load properly. Any thoughts?
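
    One rough option to explore, sketched with illustrative names (RecordPageRenders, RecordDetails.aspx are not from the original site) and assuming the detail page is an ordinary .aspx in the same application: render it server-side into a throwaway writer before offering the purchase. HttpServerUtility.Execute runs another page within the current request and captures its output instead of sending it to the browser, so a rendering failure can be caught without the user ever seeing the page.

        // Inside a Page, so the Server property is available
        protected bool RecordPageRenders(string recordId)
        {
            try
            {
                using (var writer = new System.IO.StringWriter())
                {
                    // Execute the detail page and capture its output instead of sending it
                    Server.Execute("~/RecordDetails.aspx?id=" + Server.UrlEncode(recordId), writer);
                }
                return true;
            }
            catch (Exception)
            {
                return false;   // the page failed to construct - don't take the payment
            }
        }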

    Read the article

  • Please help with IFrame callback

    - by Code Sherpa
    Hi - thanks for clicking. I am trying to get status feedback using an IFrame for file uploads. I am not trying to get progress or percentages - just when a file is done uploading and if it was a success or failure. THE PROBLEM is that I can't seem to get the server response to appear on the client. I have the following design: I have an iframe on my page: <iframe id="target_frame" src="" style="border:0px; width:0px; height:0px"></iframe> The form tag points to it: <form enctype="multipart/form-data" id="fileUploadForm" name="fileUploadForm" action="picupload.aspx" method="post" target="target_frame"> And the submit button starts a file upload via the iframe: <input id="submit" type="submit" value="upload" /> In the picupload.aspx.cs file, I have a method that returns dynamic data. I then send it to the client: message = data; Response.Write(String.Format("<script language='javascript' type='text/javascript'>window.parent.handleResponse('{0}');</script>", message)); On the client, I have a response handler: function handleResponse(msg) { document.getElementById('statusDiv').innerHTML = msg; } My intent is to see the msg value change for each uploaded file, but I never see anything appear in statusDiv, let alone dynamically changing messages. Can somebody please help??

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: Suppose I have a table that holds orders, with a bunch of data. So I'm at 1M records, and searches begin to take time. So I want to speed it up by archiving some data that is more than 3 years old - saving it into a table called orders-archive, and then purging those rows from the orders table. So if we need to research something or a customer wants to pull older information - they still can, but 99% of the lookups are done on orders no older than a year and a half - so there is no reason to keep looking through older data all the time. These move & purge operations can then be scheduled via cron on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right? However I was thinking about how to implement older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However - I have about 20 tables that I want to archive and god knows how many searches / finds are done throughout the code, which I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

    Read the article

  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want. If not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way? System.Data.DataTable dtOut = new System.Data.DataTable(); ... // find table with tablename = grp // if none, create new table bool bTableFound = false; foreach (System.Data.DataTable d1 in dsOut.Tables) { string d1_name = d1.TableName; if (d1_name.Equals(grp)) { dtOut = d1; bTableFound = true; break; } } if (!bTableFound) dtOut = RptTable(grp);
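
    A compact alternative, assuming .NET 3.5+ with using System.Linq; available, and with dsOut, grp and RptTable as in the snippet above: FirstOrDefault returns null when no table matches, so dtOut is assigned exactly once and the compiler stays happy.

        // Find the matching table, or create one if none exists
        System.Data.DataTable dtOut =
            dsOut.Tables.Cast<System.Data.DataTable>()
                        .FirstOrDefault(t => t.TableName.Equals(grp))
            ?? RptTable(grp);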

    Read the article

  • Drupal Ubercart: error in passing values back to the Content Type after checkout

    - by user512826
    I am trying to set up event registration in a drupal site using Ubercart + the UC Node Checkout Module. I have followed the instructions provided in http://drupaleasy.com/blogs/ultimike/2009/03/event-registration-ubercart. However I seem to be unable to pass the Order ID and Payment Status back to the node. I have created a conditional action that on node checkout executes the following PHP code: I am using the following code to update the node on checkout - but nothing happens: if (isset($order)) { foreach ($order->products as $product) { if (isset($product->data['node_checkout_nid'])) { $node = node_load($product->data['node_checkout_nid']); $node->field_status['0']['value'] = 1; $node->field_orderid['0']['value'] = $order->order_id; node_save($node); } } } I know the conditional action is working because it prints dsm('hello world') messages on node checkout - however when I include a dsm($node) or dsm($product) in the PHP code, they return blank. Also when I go back to my product and click the 'Devel' tab, the 'data' string contains the following characters: a:1:{s:13:"form_build_id";s:37:"form-3ccc03345f4832c69666a89c560de940";} In this link http://www.ubercart.org/forum/support/10951/node_checkout_issue I found someone else with the same issue, but I have been unable to replicate his solution. Can anybody please help? Thanks so much!

    Read the article
