Search Results

Search found 21063 results on 843 pages for 'stochastic process'.


  • Import orders file on Magento Enterprise or Community (products with custom options)

    - by wil
    Hello, we need to import an orders file into Magento Enterprise. In our file, some products contain custom options. We tried to write an extension, but we are having problems importing the custom options: the import of standard products succeeds, but for products with custom options the "info_buyRequest" value is missing from the database. Magento's technical support responded that "the import process currently can't handle importing products with custom options". Magento uses custom options when a product with custom options is ordered on the website. What does Magento use to fill in the "info_buyRequest" and "Product_options" fields when ordering? Has anyone seen an extension pack for importing order files with products that contain custom options? Thanks for your help.

    Read the article

  • JNI AttachCurrentThread NULLs the jenv

    - by Damg
    Hello all, I'm currently in the process of adding JNI functionality to a legacy Delphi app. In a single-threaded environment everything works fine, but as soon as I move to a multi-threaded environment, things start to get hairy. My problem is that calling JavaVM^.AttachCurrentThread( JavaVM, @JEnv, nil ); returns 0 but sets the JEnv pointer to nil. I have no idea why jvm.dll would return a NULL pointer. Is there anything I am missing? Thank you in advance -- damg. PS: Environment: WinXP + JDK 1.6, using JNI.pas from http://www.pacifier.com/~mmead/jni/delphi/

    Read the article

  • Workflow Foundation: Asynchronous operations (lengthy network I/O)

    - by StormianRootSolver
    I have to create an application that will be started a few times per day (it's non-interactive). To operate, it needs LARGE amounts of data from the Internet (megabytes) over a rather slow connection, so the WCF service calls take quite some time. At the same time, it needs to perform local calculations and has a sophisticated initialization process. So what I want to do is create a workflow that asynchronously fetches the data (which takes a few minutes) while already initializing and calculating locally. Is there a way to accomplish this?

    Read the article

  • Puzzle: find the minimum number of weights

    - by avd
    I came across this question: given two weights of 1 and 3, you can weigh 1, 2 (as 3-1), 3, and 4 (as 3+1). Now find the minimum number of weights needed to weigh every value from 1 to 1000. The answer is 1, 3, 9, 27, ... I want to know how you arrive at such a solution, i.e. powers of 3. What is the thought process? Source: http://classic-puzzles.blogspot.com/search/label/Google%20Interview%20Puzzles Solution: http://classic-puzzles.blogspot.com/2006/12/solution-to-shopkeeper-problem.html
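
    For context, the standard argument runs through balanced ternary (my summary, not quoted from the linked solution): a weight on the object's pan counts as -1, on the opposite pan as +1, and off the scale as 0, so k weights of sizes 3^0 .. 3^(k-1) reach every integer up to (3^k - 1)/2. In LaTeX:

        n = \sum_{i=0}^{k-1} d_i \, 3^i, \qquad d_i \in \{-1, 0, 1\}
        % the largest reachable n takes every d_i = 1:
        n_{\max} = \sum_{i=0}^{k-1} 3^i = \frac{3^k - 1}{2}
        % k = 7 is the smallest k with n_max >= 1000:
        \frac{3^7 - 1}{2} = 1093 \ge 1000
        % hence seven weights: 1, 3, 9, 27, 81, 243, 729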

    Read the article

  • Good overview tool / board for visualizing Subversion branch activity?

    - by Sam
    Our team sometimes finds it a bit confusing and time-consuming to figure out which Subversion operations have been performed on our different branches. For example: when was the Development branch last merged into the trunk? When was this particular tag created, and based on what branch? All of this information can of course be extracted from the Subversion log, but that's always a manual, time-consuming and error-prone process. The simplest solution seems to be a whiteboard with a visualization of all the different branches/tags/trunk in Subversion, with people drawing on it whenever something significant happens. But we're not averse to a digital solution either, stored centrally. Obviously both systems depend on people actually maintaining the model, but you'll always more or less have that. What do you use as best practice for keeping a clear view on all Subversion operations in the current Sprint (or beyond)?

    Read the article

  • Writing to CSV issue in Spyder

    - by 0003
    I am doing the Kaggle Titanic beginner contest. I generally work in the Spyder IDE, but I came across a weird issue. The expected output is 418 rows. When I run the script from the terminal, the output is 418 rows, as expected. When I run it in the Spyder IDE, the output is 408 rows, not 418. When I then re-run it in the same Python process, it outputs the expected 418 rows. I've posted a redacted portion of the code that has all of the relevant bits. Any ideas?

        import csv
        import numpy as np

        csvFile = open("/train.csv", "ra")
        csvFile = csv.reader(csvFile)
        header = csvFile.next()

        testFile = open("/test.csv", "ra")
        testFile = csv.reader(testFile)
        testHeader = testFile.next()

        writeFile = open("/gendermodelDebug.csv", "wb")
        writeFile = csv.writer(writeFile)

        count = 0
        for row in testFile:
            if row[3] == 'male':
                # do something to row (redacted)
                writeFile.writerow(row)
                count += 1
            elif row[3] == 'female':
                # do something to row (redacted)
                writeFile.writerow(row)
                count += 1
            else:
                raise ValueError("Did not find a male or female in %s" % row)
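
    One thing worth checking (my guess, not something stated in the post): the script never closes writeFile, so the last rows can sit in the file buffer for as long as the Python process lives - and Spyder keeps the process alive between runs, whereas a terminal run exits and flushes everything, which would explain both symptoms. A minimal sketch of the same write path with a context manager that guarantees the flush (file names as in the question):

        import csv

        # The with-block closes - and therefore flushes - both files,
        # even if the loop raises.
        with open("/test.csv", "rb") as test_in, \
             open("/gendermodelDebug.csv", "wb") as test_out:
            reader = csv.reader(test_in)
            writer = csv.writer(test_out)
            reader.next()  # skip the header row, as in the original
            for row in reader:
                writer.writerow(row)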

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background

    I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process and hope to move them to a shared Redis. I have a dozen or so caches of simple, small business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId, and one "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int,Foo> _cacheFooById;
        Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually maintain this invariant: "for a given group [say, by OwnerId], the whole group is in cache or none of it is". So when I get a cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates keep the index up to date and respect the invariant. When an owner calls GetMyFoos, I never have to worry that some are cached and some aren't.

    What I did already

    The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and a JSON value:

        SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo) );

    I later decided I liked:

        HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo) );

    That lets me get all the values in one cache with HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    that translates into:

        HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo) );
        var myFoos = JsonDeserialize( HGET( "ServiceCache:FooIndex", theFoo.OwnerId ) );
        myFoos.Add(theFoo.FooId);
        HSET( "ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos) );

    However, this is broken in two ways:

    1. Two concurrent operations can read/modify/write at the same time. The later one "wins" the final HSET and the earlier one's index update is lost.
    2. Another operation could read the index between the first and second lines. It would miss a Foo that it should find.

    So how do I index properly? I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since "add to index if not already present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem to do what I want: am I right that I can't do MULTI; HGET; {update}; HSET; EXEC, since it doesn't even perform the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, retrying on failure - but WATCH only works on top-level keys, so it's back to SET/GET instead of HSET/HGET, and now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while (!succeeded)
        {
            WATCH( "ServiceCache:Foo:" + theFoo.FooId );
            WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId );
            WATCH( "ServiceCache:FooIndexAll" );
            MULTI();
            SET ( "ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo) );
            SADD( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId );
            SADD( "ServiceCache:FooIndexAll", theFoo.FooId );
            EXEC();
            // TODO: somehow set succeeded properly
        }

    Finally, I'd have to translate this pseudocode into real code depending on how my client library exposes WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. All in all, this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly? Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** stale data clobbering C's update

    Note that C could just be a really fast A. Again, I think WATCH, MULTI, retry would work, but... ick. I know that in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or a separate hash - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
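
    For reference, here is roughly what that WATCH/MULTI/retry loop looks like with a concrete client - a hedged sketch using Python's redis-py (my choice of library; the key names follow the question, and the shape of a Foo is assumed):

        import json
        import redis

        r = redis.Redis()

        def put_foo(foo):
            # Write one Foo and its index entries under optimistic locking.
            foo_key = "ServiceCache:Foo:%d" % foo["FooId"]
            owner_key = "ServiceCache:FooIndexByOwner:%d" % foo["OwnerId"]
            all_key = "ServiceCache:FooIndexAll"
            with r.pipeline() as pipe:
                while True:
                    try:
                        pipe.watch(foo_key, owner_key, all_key)  # optimistic lock
                        pipe.multi()                             # start queuing commands
                        pipe.set(foo_key, json.dumps(foo))
                        pipe.sadd(owner_key, foo["FooId"])
                        pipe.sadd(all_key, foo["FooId"])
                        pipe.execute()  # raises WatchError if a watched key changed
                        return
                    except redis.WatchError:
                        continue        # lost the race - retry the read/modify/write

    This only illustrates the mechanics; the retry loop is exactly the "ick" the question describes.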

    Read the article

  • Is referencing a selector faster in jQuery than actually calling the selector? If so, how much of a difference does it make?

    - by anthonypliu
    Hi, I have this code:

        $("preview-button").click(...);
        $("preview-button").slide(...);
        $("preview-button").whatever(...);

    Is it better practice to do this:

        var previewButton = $("preview-button");
        previewButton.click(...);
        previewButton.slide(...);
        previewButton.whatever(...);

    It probably would be better practice for the sake of keeping the code clean and modular, BUT does it make a difference performance-wise? Does one take longer to process than the other? Thanks guys.

    Read the article

  • asynchronous writing and reading of a file

    - by tazim
    Hi, I have two processes:

    1. One process redirects the output of some Unix command to a file on the server side; the data is always appended to the file, e.g. find / > tmp.txt
    2. Another process opens and reads the same file, stores the contents in a string, and sends the entire string to the client.

    These things happen simultaneously. I am using Python. Any suggestions on possible ways to implement this scenario? Please explain with sample code. Thanks in advance. Tazim.
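
    One possible shape for the reading side (a sketch of mine, not from the post): poll the growing file and yield whatever new data has been appended, the way tail -f does, then push each chunk to the client:

        import time

        def follow(path, poll_seconds=0.5):
            # Yield chunks of data as they are appended to the file at `path`.
            with open(path) as f:
                while True:
                    chunk = f.read()  # reads from the current offset to EOF
                    if chunk:
                        yield chunk
                    else:
                        time.sleep(poll_seconds)  # nothing new yet - wait for the writer

    Usage would be something like for chunk in follow("tmp.txt"): send_to_client(chunk), where send_to_client is whatever transport the server already uses.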

    Read the article

  • Google Contacts Data API and PHP

    - by pako
    I'm developing a PHP application to retrieve the list of contacts from a Gmail account. I'm looking for a solution that would let the user of my application provide the login and password for their Gmail account inside my application (as opposed to being redirected to Google) and then do the retrieval automatically. The retrieval process can run in PHP or in JavaScript (which would then feed the list of contacts back to PHP using Ajax). Is it possible to do that? Which JavaScript API should I use? Can someone point me to the right chapter in the Google Contacts Data API documentation?

    Read the article

  • What is the fastest way to compare 2 rows in SQL?

    - by Swoosh
    I have 2 different databases. When something changes in the big one (which I don't have access to), some rows get imported into a similar HUGE table in my database. I have a job that checks for records in this table and, if there are any, executes a stored procedure to process them and delete them from the table. Performance matters (huge amounts of data). I would like to know the fastest way to tell whether something has changed between, let's say, 2 imported rows with 100 columns each. There are no FKs; we don't need them. Chances are that even though I have records in my table, nothing has actually changed. Also, suppose something actually has changed: is it possible, for example, to check for changes only in the datetime columns? Thanks

    Read the article

  • Creating an n tiered application

    - by aaron
    I am researching the architecture for a project that will be started next year. It is mainly a C# web app, but there will be a service layer so that it can talk to our Facebook/iPhone apps. There are a few long-running processes, which means that I will be creating a Windows service to handle those. I'm thinking of putting the entire app in the Windows service instead of just the long-running processes: ASP - WCF - BLL vs. ASP - BLL. I know this will be more scalable, but it is probably overkill, as everything will be running on the same box, even the database. This could change down the road if the server can't handle the traffic like marketing says it will. I don't have access to production hardware, just my crappy testing box and my local machine. Has anyone decided to go down this route? But mostly: what is the best way to test both approaches and get some metrics?

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards the extra rows, but the problem is that the flat file I'm importing is still 250MB, and when it is converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard the rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep the rows that start with a certain string. Thanks for any help you're able to provide.

    Read the article

  • How to have partial incremental synchronizations based on a GUID?

    - by Gonçalo Veiga
    I need to synchronize a SQL Server database to Oracle through an Oracle Transparent Gateway. The synchronization is performed in batches, so I need to get the next set of data from the point where I left off. The problem I'm having is that the only field in the source that can help me is a GUID. If it were a number, I could just order by it, keep the last one processed, and restart the process by fetching the records greater than my recorded number. That won't work with a GUID. Any ideas?

    Read the article

  • Pointer to another class as a property

    - by arjacsoh
    Why do I receive an error when I try to declare a property of another class type through a pointer, like this:

        #ifndef SQUARE_H
        #define SQUARE_H
        #include <string>
        //using namespace std;
        #include "Player.h"

        class Square {
        public:
            Square(int);
            void process();
        protected:
            int ID;
            Player* PlayerOn;   // <--- error here
        };
        #endif

    and the Player class is:

        #ifndef PLAYER_H
        #define PLAYER_H
        #include <string>
        //using namespace std;
        #include "Square.h"

        class Player {
        public:
            Player(int, int);
            // ~Player(void);
            int playDice();
        private:
            int ID;
            int money;
        };
        #endif

    I receive "syntax error: missing ';' before '*'" (on the declaration of Player* PlayerOn;) and "missing type specifier" (on the same line).

    Read the article

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word processor, should the test read:

    1. Using the mouse, open the File menu
    2. Choose "Open File..." in the File menu
    3. In the Open File dialog that appears, navigate to x and double-click the document called y

    OR

    1. Bring up the File Open dialog
    2. Open the file y

    Now I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here: if the test steps are too specific, do we risk (a) making the testing process too laborious and tedious and, more importantly, (b) missing something because we wrote down too specific a path to achieve a goal? Alternatively, if we make it broad, do we depend too much on the whims of the tester at the time and lose crucial testing of paths that are more common for customers/clients?

    Read the article

  • What is the fastest way to write hundreds of files to disk using C#?

    - by Ehsan
    My program needs to write hundreds of files to disk, received from external resources (the network). Each file is a simple document that I currently store under a GUID name in a specific folder, but creating, writing and closing hundreds of files is a lengthy process. Is there a better way to store this quantity of files on disk? I've come up with a solution, but I don't know if it is the best one: I create 2 files, one of which acts as an allocation table while the second is a huge file storing the content of all my documents. But reading from this file would be a nightmare; maybe a memory-mapped file technique could help. Could working with 30GB or more create a problem?
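
    A minimal sketch of the allocation-table idea described above (my illustration - the names and on-disk format are made up): append each document to one big blob file and record (guid, offset, length) in a small index, so a read becomes a seek plus a bounded read instead of an open/write/close per document.

        def append_doc(blob_path, index, guid, data):
            # Append one document to the blob file and record where it landed.
            with open(blob_path, "ab") as blob:
                offset = blob.tell()
                blob.write(data)
            index[guid] = (offset, len(data))  # the allocation-table entry

        def read_doc(blob_path, index, guid):
            offset, length = index[guid]
            with open(blob_path, "rb") as blob:
                blob.seek(offset)
                return blob.read(length)

    The index itself still has to be persisted, and reading a 30GB blob is exactly where the memory-mapped view the question mentions could replace the seek/read pair.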

    Read the article

  • Is there such a thing as a desktop application event aggregator, similar to that used in Prism?

    - by brownj
    The event aggregator in Prism is great, and allows loosely coupled communication between modules within a composite application. Does such a thing exist that allows the same thing to happen between standalone applications running on a user's desktop? I could imagine developing a solution that uses WCF with TCP binding and running inside Windows Process Activation Service. Client applications could subscribe or publish events to this service as required and it would ensure all other listeners get notified of events as appropriate. Using TCP would enable event messages to be pushed out to clients without the need for polling, ensuring messages are delivered very quickly. I can't help but think though that such a thing would already exist... Is anyone aware of something like this, or have any advice on how it may be best implemented?

    Read the article

  • More questions about threads and orientation changes

    - by synic
    When it comes to threads and orientation changes, it seems the normal thing to do is something like this:

        public class Bwent extends Activity {
            private static Bwent instance;

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                instance = this;
            }
            //...

    That way, if you're making a network request with a thread and someone changes the orientation of the phone, the thread will know to use the new Activity. However, is it possible that the thread could finish during the time Android is destroying the old Activity and creating a new one? Is there a moment in the process where the thread might still be pointing to the wrong Activity, or to a partially destroyed Activity? It seems like there shouldn't be, but even using a Handler created on the main thread, I'm having intermittent issues with a thread trying to update an object that no longer exists. It's rare, but it does happen.

    Read the article

  • How to make IIS wait until a WCF service is ready?

    - by Kamarey
    I have a WCF service hosted in IIS 7. It takes a few minutes for this service to finish loading its data and become ready for external calls; the data loads on an internal thread. The problem is that, from IIS's point of view, the service is ready as soon as it has been activated (by some call), and it processes requests without waiting for the data to be loaded. Is it possible to tell IIS that the service is still loading and make it unavailable for requests? It's no problem if such a request throws an exception.

    Read the article

  • create a model in create action from a class

    - by Pontek
    As a newbie to Rails, I can't figure out how to solve my issue. I want to create a VideoPost from a form with a text field containing a video URL (YouTube, for example). I'm getting information about the video thanks to the gem https://github.com/thibaudgg/video_info, and I want to save that information using a model of mine (VideoInformation). But I don't know how the create process should work. Thanks for any help! I'm trying to create a VideoPost in VideoPostsController like this:

        def create
          video_info = VideoInfo.new(params[:video_url])
          video_information = VideoInformation.create(video_info)
          # undefined method `stringify_keys' for #<Youtube:0x00000006a24120>
          if video_information.save
            @video_post = current_user.video_posts.build(video_information)
          end
        end

    My VideoPost model:

        # Table name: video_posts
        #
        #  id                   :integer   not null, primary key
        #  user_id              :integer
        #  video_information_id :integer
        #  created_at           :datetime  not null
        #  updated_at           :datetime  not null

    My VideoInformation model (whose attribute names match the VideoInfo gem):

        # Table name: video_informations
        #
        #  id              :integer      not null, primary key
        #  title           :string(255)
        #  description     :text
        #  keywords        :text
        #  duration        :integer
        #  video_url       :string(255)
        #  thumbnail_small :string(255)
        #  thumbnail_large :string(255)
        #  created_at      :datetime     not null
        #  updated_at      :datetime     not null

    Read the article

  • better for-loop syntax for detecting empty sequences?

    - by Dmitry Beransky
    Hi, is there a better way to write the following?

        row_counter = 0
        for item in iterable_sequence:
            # do stuff with the item
            row_counter += 1
        if not row_counter:
            # handle the empty-sequence case

    Please keep in mind that I can't use len(iterable_sequence), because (1) not all sequences have known lengths and (2) in some cases calling len() may trigger loading the sequence's items into memory (as would be the case with SQL query results). The reason I ask is that I'm simply curious whether there is a way to make the above more concise and idiomatic. What I'm looking for is something along the lines of:

        for item in sequence:
            # process item
        *else*:
            # handle the empty sequence case

    (assuming "else" here ran only on empty sequences, which I know it doesn't)
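
    For comparison, the common flag-based idiom (my sketch, not from the post) - no shorter, but it avoids maintaining a count just to test for emptiness:

        empty = True
        for item in iterable_sequence:
            empty = False
            # do stuff with the item
        if empty:
            # handle the empty-sequence case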

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that, for various reasons, is now unsupported by its developers. I have been asked to look into extracting the documents contained in this system so they can eventually be imported into a new third-party system. From tracing and process monitoring, I have determined that the document images (mainly TIFF files) are stored in a number of 1.5GB files. These files seem to be read from a specific offset and written to a tmp file, which is then served to the client via a web app and deleted. I am looking for suggestions as to how I can inspect these large files containing the TIFF images, and eventually extract the images and write them to individual files.
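
    One possible starting point (my suggestion - the container layout is unknown, so offsets found this way are only candidates): scan each 1.5GB file for the 4-byte TIFF magic numbers, II*\0 for little-endian files and MM\0* for big-endian ones, and record where embedded images may begin. A sketch:

        import re

        TIFF_MAGIC = re.compile(rb"II\*\x00|MM\x00\*")

        def candidate_offsets(path, chunk_size=1 << 20):
            # Yield file offsets where a TIFF header appears to start.
            with open(path, "rb") as f:
                base = 0
                data = f.read(chunk_size)
                while data:
                    for m in TIFF_MAGIC.finditer(data):
                        yield base + m.start()
                    tail = data[-3:]  # 3-byte overlap catches headers split across chunks
                    base += len(data) - len(tail)
                    nxt = f.read(chunk_size)
                    if not nxt:
                        break
                    data = tail + nxt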

    Read the article

  • Which tool should I use for finding out my memory allocation in Perl?

    - by Colin Newell
    I've slurped in a big file using File::Slurp, but given the size of the file, I can see that I must have it in memory twice, or perhaps it's getting inflated by being turned into 16-bit Unicode. How can I best diagnose that sort of problem in Perl? The file I pulled in is 800MB in size, and my Perl process analysing that data has roughly 1.6GB allocated at runtime. I realise that I may be wrong about the cause of the problem, but I'm not sure of the most efficient way to prove or disprove my theory.

    Read the article

  • Microsoft SQL Server 2005/2008 SSIS is oversized

    - by Ice
    In this case I'm old-style and loved 'my father's DTS' from SQL 2000. In most of my cases I have to import a flat file into a table; in a second step I use some procedures (with the new MERGE statement) to process the imported content. For export, I define an export table and populate it with a stored proc (containing a MERGE statement), and in a second step the content is exported to a flat file. In some cases there is no flat file, because there is another SQL Server, or in rare cases an ODBC connection to a Sybase server or similar. What do you think? When it comes to complex ETL stuff, SSIS may be the right tool... but I haven't seen such a case yet.

    Read the article
