Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 344/402 | < Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >

  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my database:

        <?php
        ob_start();
        $host = "localhost";    // Host name
        $username = "root";     // MySQL username
        $password = "";         // MySQL password
        $db_name = "test";      // Database name
        $tbl_name = "members";  // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        ?>

    However, I get the following errors:

        Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not properly respond (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

        Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a database and tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg Any help would be appreciated; thanks in advance.
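    A minimal diagnostic sketch, not a confirmed fix: connecting to 127.0.0.1 rather than localhost sidesteps hostname resolution (a common cause of error 2002 on Windows), and checking mysql_error() surfaces the real failure instead of a generic die():

        <?php
        // Sketch: use the loopback IP directly and report the driver error.
        $link = mysql_connect('127.0.0.1', 'root', '');
        if (!$link) {
            die('Connect failed: ' . mysql_error());
        }
        mysql_select_db('test', $link) or die('Select failed: ' . mysql_error());
        ?>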

    Read the article

  • Function Returning Negative Value

    - by Geowil
    I still have not run it through enough tests, but for some reason, with certain non-negative inputs, this function will sometimes pass back a negative value. I have done a lot of manual testing in a calculator with different values, but I have yet to see it display this same behavior. I was wondering if someone would take a look and see if I am missing something.

        float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand)
        {
            return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8);
        }

    The variables all contain randomly generated values:

        popRand1: between 1 and 30
        popRand2: between 10 and 30
        popRand3: between 50 and 100
        pSRand:   between 1 and 1000
        pERand:   between 1.0f and 5500.0f, which is then multiplied by 0.001f before being passed to the function above

    Edit: Alright, so after following the execution a bit more closely, it is not the fault of this function directly. It produces an infinitely positive float which then flips negative when I use this code later on:

        pPMax = (int)pPStore;

    pPStore is a float that holds calcPop's return. So the question now is, how do I stop the formula from doing this? Testing even with very high values in a calculator has never displayed this behavior. Is there something in how the compiler processes the order of operations that is causing this, or are my values simply going too high? If the latter, I could just increase the divisor to 16, I think.
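    For scale (a back-of-envelope check, not the poster's data): the worst case is 23000 * 30 * 30 * 5.5 * 1000 * 100 / 8 ≈ 1.4e12, which a float holds comfortably but which far exceeds INT_MAX, so the later (int) cast overflows; that conversion is undefined behavior in C++ and typically wraps negative. A minimal sketch of a clamped cast, assuming clamping is an acceptable fix:

        #include <climits>

        // Sketch: keep the float-to-int conversion inside the defined range.
        int toIntClamped(float v)
        {
            if (v >= static_cast<float>(INT_MAX)) return INT_MAX;
            if (v <= static_cast<float>(INT_MIN)) return INT_MIN;
            return static_cast<int>(v);
        }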

    Read the article

  • read files from a directory and filter files in Java

    - by Adnan
    The following code goes through all directories and sub-directories and outputs just the .java files:

        import java.io.File;

        public class DirectoryReader {

            private static String extension = "none";
            private static String fileName;

            public static void main(String[] args) {
                String dir = "C:/tmp";
                File aFile = new File(dir);
                ReadDirectory(aFile);
            }

            private static void ReadDirectory(File aFile) {
                File[] listOfFiles = aFile.listFiles();
                if (aFile.isDirectory()) {
                    listOfFiles = aFile.listFiles();
                    if (listOfFiles != null) {
                        for (int i = 0; i < listOfFiles.length; i++) {
                            if (listOfFiles[i].isFile()) {
                                fileName = listOfFiles[i].toString();
                                int dotPos = fileName.lastIndexOf(".");
                                if (dotPos > 0) {
                                    extension = fileName.substring(dotPos);
                                }
                                if (extension.equals(".java")) {
                                    System.out.println("FILE:" + listOfFiles[i]);
                                }
                            }
                            if (listOfFiles[i].isDirectory()) {
                                ReadDirectory(listOfFiles[i]);
                            }
                        }
                    }
                }
            }
        }

    Is this efficient? What could be done to increase the speed? All ideas are welcome.
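    One hedged observation: the directory is listed twice, and the static extension field keeps its last value across files, so a file with no dot inherits the previous file's extension. A minimal sketch without either issue (names are illustrative):

        import java.io.File;

        public class JavaFileWalker {

            // Sketch: list each directory once, match on the suffix directly.
            private static void walk(File dir) {
                File[] entries = dir.listFiles();
                if (entries == null) return; // not a directory, or I/O error
                for (File f : entries) {
                    if (f.isDirectory()) {
                        walk(f);
                    } else if (f.getName().endsWith(".java")) {
                        System.out.println("FILE:" + f);
                    }
                }
            }

            public static void main(String[] args) {
                walk(new File("C:/tmp"));
            }
        }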

    Read the article

  • Android Signal 11 (SIGSEGV)

    - by Naturjoghurt
    I have read many posts here and on other sites, but cannot find the cause of my error. I use an AsyncTask because I want to easily manipulate the UI thread before and after execution. In doInBackground I create a ThreadPoolExecutor and execute Runnables. If I only execute one Runnable with the executor there is no problem, but if I execute another Runnable I get the following error:

        06-26 18:00:42.288: A/libc(25073): Fatal signal 11 (SIGSEGV) at 0x7f486162 (code=1), thread 25106 (pool-1-thread-2)
        06-26 18:00:42.304: D/dalvikvm(25073): GC_CONCURRENT freed 119K, 2% free 8908K/9056K, paused 4ms+4ms, total 45ms
        06-26 18:00:42.327: I/System.out(25073): In Check All with Prefix: a and Length: 4
        06-26 18:00:42.390: I/DEBUG(126): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        06-26 18:00:42.390: I/DEBUG(126): Build fingerprint: 'google/yakju/maguro:4.2.2/JDQ39/573038:user/release-keys'
        06-26 18:00:42.390: I/DEBUG(126): Revision: '9'
        06-26 18:00:42.390: I/DEBUG(126): pid: 25073, tid: 25106, name: pool-1-thread-2 >>> de.uni_duesseldorf.cn.distributed_computing2 <<<
        06-26 18:00:42.390: I/DEBUG(126): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 7f486162
        ...
        06-26 18:00:42.538: I/DEBUG(126): memory map around fault addr 7f486162:
        06-26 18:00:42.538: I/DEBUG(126): 60292000-60391000
        06-26 18:00:42.538: I/DEBUG(126): (no map for address)
        06-26 18:00:42.538: I/DEBUG(126): bed14000-bed35000 [stack]

    I set up the ThreadPoolExecutor like this:

        // numberOfPackages: number of Runnables to be executed
        public void initializeThreadPoolExecutor(int numberOfPackages) {
            int corePoolSize = Runtime.getRuntime().availableProcessors();
            int maxPoolSize = numberOfPackages;
            long keepAliveTime = 60;
            final BlockingQueue workingQueue = new LinkedBlockingQueue();
            executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize,
                    keepAliveTime, TimeUnit.SECONDS, workingQueue);
        }

    I have no clue why it fails when starting the second thread. Maybe memory leaks? Any help appreciated. Thanks in advance.
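    Two hedged notes, since the failing code is not shown: a SIGSEGV on Dalvik almost always originates in native code (JNI or an NDK library) rather than in the executor itself, so the second thread may simply be the first to hit a thread-unsafe native call. Also, with an unbounded LinkedBlockingQueue a ThreadPoolExecutor never grows past corePoolSize, so the maxPoolSize of numberOfPackages has no effect. A minimal sketch of the pool with typed generics and core fixed to max:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class PoolSetup {
            // Sketch: with an unbounded queue, only corePoolSize threads are
            // ever created, so core and max are deliberately the same here.
            public static ExecutorService newPool() {
                int threads = Runtime.getRuntime().availableProcessors();
                return new ThreadPoolExecutor(threads, threads,
                        60L, TimeUnit.SECONDS,
                        new LinkedBlockingQueue<Runnable>());
            }
        }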

    Read the article

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue with converting from a multi-byte string to a wide-character string. The code needs to be cross-platform compatible, so I am using mbstowcs and wcstombs as much as possible. I am currently working on a Win32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <locale.h>
        #include <windows.h>

        int main(int argc, char** argv)
        {
            wchar_t *wcsVal = NULL;
            char *mbsVal = NULL;

            /* Get the current code page, in my case 932; runs only on Windows */
            TCHAR szCodePage[10];
            int cch = GetLocaleInfo(GetSystemDefaultLCID(),
                                    LOCALE_IDEFAULTANSICODEPAGE,
                                    szCodePage, sizeof(szCodePage));

            /* verify locale is set */
            if (setlocale(LC_CTYPE, "") == 0) {
                fprintf(stderr, "Failed to set locale\n");
                return 1;
            }

            mbsVal = argv[1];

            /* validate multibyte string and convert to wide character */
            int size = mbstowcs(NULL, mbsVal, 0);
            if (size == -1) {
                printf("Invalid multibyte\n");
                return 1;
            }
            wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1));
            if (wcsVal == NULL) {
                printf("memory issue\n");
                return 1;
            }
            mbstowcs(wcsVal, mbsVal, size + 1);
            wprintf(L"%ls \n", wcsVal);
            return 0;
        }

    At the end of execution, the wide-character string does not contain the converted data. I believe that there is an issue with the code page settings, because when I use MultiByteToWideChar with the current code page passed in, e.g.

        MultiByteToWideChar(CP_ACP, 0, mbsVal, -1, wcsVal, size + 1);

    in place of the mbstowcs calls, the conversion succeeds. My question is, how do I use the generic mbstowcs call instead of the MultiByteToWideChar call?
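    One hedged suggestion (an assumption about the cause, not a confirmed diagnosis): setlocale(LC_CTYPE, "") selects the user's default locale, which need not match the code page the input bytes are actually encoded in; naming the locale explicitly makes mbstowcs deterministic. A minimal sketch using MSVC's locale-name syntax:

        #include <locale.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <wchar.h>

        int main(void)
        {
            /* Sketch: request code page 932 explicitly rather than relying
               on the user default (locale name syntax is MSVC-specific). */
            if (setlocale(LC_CTYPE, "Japanese_Japan.932") == NULL) {
                fprintf(stderr, "locale not available\n");
                return 1;
            }
            const char *mbs = "\x93\xfa\x96\x7b"; /* Shift-JIS sample bytes */
            size_t n = mbstowcs(NULL, mbs, 0);
            if (n == (size_t)-1) {
                fprintf(stderr, "invalid multibyte sequence\n");
                return 1;
            }
            wchar_t *wcs = malloc((n + 1) * sizeof(wchar_t));
            if (wcs == NULL) return 1;
            mbstowcs(wcs, mbs, n + 1);
            wprintf(L"%ls\n", wcs);
            free(wcs);
            return 0;
        }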

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in; they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and after I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As usual, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?), I've declared them as ref:

        let rec read file includepath =
            let c = ref ' '
            let k = ref NONE
            let sb = new StringBuilder()
            use stream = File.OpenText file
            let readc() =
                c := stream.Read() |> char
            // etc

    I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?
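    For reference, a minimal sketch of the class-based alternative mentioned above (illustrative names, not the poster's code): let mutable bindings inside a class become plain instance fields, avoiding the ref-cell allocation and its extra indirection:

        open System.IO
        open System.Text

        // Sketch: lookahead state held in instance fields, not ref cells.
        type Lexer(file: string) =
            let stream = File.OpenText file
            let sb = StringBuilder()
            let mutable c = ' '
            member this.ReadC() = c <- char (stream.Read())
            member this.Current = c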

    Read the article

  • Why is my code stopping and not returning an exception?

    - by BeckyLou
    I have some code that starts a couple of threads to let them execute, then uses a while loop to check for the current time passing a set timeout period, or for the correct number of results to have been processed (by checking an int on the class object), with a Thread.Sleep() to wait between loops. Once the while loop is set to exit, it calls Abort() on the threads and should return data to the function that calls the method.

    When debugging and stepping through the code, I find there can be exceptions in the code running on the separate threads; in some cases I handle these appropriately, and at other times I don't want to do anything specific. What I have been seeing is that my code goes into the while loop and the thread sleeps, then nothing is returned from my function, either data or an exception. Code execution just stops completely. Any ideas what could be happening? Code sample:

        System.Threading.Thread sendThread = new System.Threading.Thread(new System.Threading.ThreadStart(Send));
        sendThread.Start();
        System.Threading.Thread receiveThread = new System.Threading.Thread(new System.Threading.ThreadStart(Receive));
        receiveThread.Start();

        // timeout
        Int32 maxSecondsToProcess = this.searchTotalCount * timeout;
        DateTime timeoutTime = DateTime.Now.AddSeconds(maxSecondsToProcess);
        Log("Submit() Timeout time: " + timeoutTime.ToString("yyyyMMdd HHmmss"));

        // while we're still waiting to receive results & haven't hit the timeout,
        // keep the threads going
        while (resultInfos.Count < this.searchTotalCount && DateTime.Now < timeoutTime)
        {
            Log("Submit() Waiting...");
            System.Threading.Thread.Sleep(10 * 1000); // 10 seconds
        }

        Log("Submit() Aborting threads"); // <== this log doesn't show up
        sendThread.Abort();
        receiveThread.Abort();

        return new List<ResultInfo>(this.resultInfos.Values);
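    A hedged note (an assumption, since Send and Receive are not shown): if resultInfos is a plain Dictionary that worker threads mutate while the loop reads Count, the behavior is undefined and can hang or corrupt state; and an unhandled exception in a worker kills that thread silently, so the loop can wait forever on a count that will never arrive. A minimal sketch of joining with a deadline instead of polling and aborting:

        using System;
        using System.Threading;

        static class JoinWithDeadline
        {
            // Sketch: wait for workers up to a shared deadline (names illustrative).
            public static void WaitAll(Thread[] workers, int maxSeconds)
            {
                DateTime deadline = DateTime.Now.AddSeconds(maxSeconds);
                foreach (Thread t in workers)
                {
                    TimeSpan remaining = deadline - DateTime.Now;
                    if (remaining < TimeSpan.Zero || !t.Join(remaining))
                        Console.WriteLine("Timed out waiting for " + t.Name);
                }
            }
        }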

    Read the article

  • Using static variables for Strings

    - by Vivart
    The content below is taken from "Best practice: Writing efficient code", but I didn't understand why

        private static String x = "example";

    is faster than

        private static final String x = "example";

    Can anybody explain this?

    Using static variables for Strings: When you define static fields (also called class fields) of type String, you can increase application speed by using static variables (not final) instead of constants (final). The opposite is true for primitive data types, such as int. For example, you might create a String object as follows:

        private static final String x = "example";

    For this static constant (denoted by the final keyword), each time that you use the constant, a temporary String instance is created. The compiler eliminates "x" and replaces it with the string "example" in the bytecode, so that the BlackBerry® Java® Virtual Machine performs a hash table lookup each time that you reference "x". In contrast, for a static variable (no final keyword), the String is created once. The BlackBerry JVM performs the hash table lookup only when it initializes "x", so access is faster.

        private static String x = "example";

    You can use public constants (that is, final fields), but you must mark variables as private.

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algorithm... I do really need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if it can help.

    I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data.

    btw in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
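    A hedged sketch of one candidate answer (editorial, not from the post): FileChannel.read(ByteBuffer, long position) does a positional read without moving a shared file pointer, so multiple threads can share a single channel; whether it beats a memory-mapped file for uniformly random reads is worth benchmarking rather than assuming:

        import java.io.EOFException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class RandomReads {

            // Sketch: thread-safe positional 4-byte read from a read-only file.
            static int readIntAt(FileChannel ch, long offset) throws Exception {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (ch.read(buf, offset + buf.position()) < 0)
                        throw new EOFException();
                }
                buf.flip();
                return buf.getInt();
            }

            public static void main(String[] args) throws Exception {
                RandomAccessFile raf = new RandomAccessFile("table.bin", "r"); // name illustrative
                try {
                    System.out.println(readIntAt(raf.getChannel(), 1024));
                } finally {
                    raf.close();
                }
            }
        }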

    Read the article

  • Looking for a good C++/.NET book

    - by Michael Minerva
    I have recently started to feel that I need to greatly improve my C++ skills, especially in the realm of .NET. I graduated from a good four-year university with a degree in computer science about 9 months ago, and I have since been doing full-time contract work for a small software company in my local area. Most of my work has been done using Java/Lisp/Cocoa/XML, and before that most of my programming in my senior year was in Java/C#. I did a decent amount of C++ in my sophomore year and in my free time before that, but I feel that my general knowledge of C++/.NET is very lacking for the opportunities that are now coming my way. Can anyone recommend a good book that could help me get up to speed? I feel I do not need a very basic introduction to C++, but something that covers the fundamentals of .NET would be good for me. So basically what I need is a book or books that would be good for a .NET novice and a C++ developer who is just beyond novice. Also, a book that would give me a conversational understanding of C++ to help in an interview would be great. Thanks a lot!

    Read the article

  • Gradual memory leak and slowdown in loop

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString *movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];

        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, in order to prevent very large memory buildups with the various transparently autoreleased variables in the loop, we create, and flush, an autorelease pool with every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases, and the speed at which frames are processed drops precipitously (from ~0.5 seconds per frame at the start, to ~2 seconds per frame by the 250th frame).

    The only thing I can think can be causing the gradual memory leak is a buildup of the NSAutoreleasePool objects themselves. Am I right in thinking they will only be deallocated when the outer pool is released? If so, is there a better memory management solution here? Creating a pool every run through the loop seems a little hacky. And if not, what is causing the slow memory leak? (It is not NSStrings, and much too slow to be NSImages or NSDatas.) And what could be causing the slowdown?

    Read the article

  • What is the fastest way to check if files are identical?

    - by ojblass
    If you have 1,000,000 source files, you suspect they are all the same, and you want to compare them, what is the current fastest method to compare those files? Assume they are Java files, and the platform where the comparison is done is not important. cksum is making me cry. When I say identical I mean ALL identical.

    Update: I know about generating checksums. diff is laughable... I want speed.

    Update: Don't get stuck on the fact they are source files. Pretend, for example, that you took a million runs of a program with very regulated output. You want to prove all 1,000,000 versions of the output are the same.

    Update: Read the number of blocks rather than bytes? Immediately throw out those that differ? Is that faster than finding the number of bytes?

    Update: Is this ANY different than the fastest way to compare two files?
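    A hedged sketch of the usual strategy (editorial, not from the post): compare cheap metadata first, since one mismatched file size settles the question without reading any content, and only then hash the contents:

        import hashlib
        import os
        import sys

        # Sketch: identical iff every size matches and every digest matches.
        def all_identical(paths):
            if len({os.path.getsize(p) for p in paths}) != 1:
                return False                      # cheap metadata check first
            digests = set()
            for p in paths:
                h = hashlib.sha1()
                with open(p, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                digests.add(h.digest())
                if len(digests) > 1:
                    return False                  # bail out on first mismatch
            return True

        if __name__ == "__main__":
            print(all_identical(sys.argv[1:]))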

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed with which it can be queried?

    I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized.

    Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet, and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
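    A hedged sketch of the split being weighed (table and column names are illustrative): a one-to-one stats table keeps the hot, frequently updated game columns out of the wider account row:

        -- Sketch: 1:1 split between account data and game state.
        CREATE TABLE members (
            member_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            username  VARCHAR(32)  NOT NULL,
            email     VARCHAR(255) NOT NULL
        );

        CREATE TABLE member_stats (
            member_id INT UNSIGNED PRIMARY KEY,
            army_size INT UNSIGNED NOT NULL DEFAULT 0,
            gold      INT UNSIGNED NOT NULL DEFAULT 0,
            turns     INT UNSIGNED NOT NULL DEFAULT 0,
            FOREIGN KEY (member_id) REFERENCES members (member_id)
        );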

    Read the article

  • Urgent help: grid not showing on the page

    - by kumar
    Hello friends, I have two user controls: 1) name.ascx and 2) friends.ascx. In both controls the code is the same; only the grid name and columns change, and they point to different URLs.

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<JQGridMVCExamples.Models.OrdersJqGridModel>" %>
        <div>
            <%= Html.Trirand().JQGrid(Model.OrdersGrid, "JQGrid1") %>
        </div>

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<JQGridMVCExamples.Models.OrdersJqGridModel2>" %>
        <div>
            <%= Html.Trirand().JQGrid(Model.OrdersGrid2, "JQGrid2") %>
        </div>

    But what is happening is that the second grid, JQGrid2, is not showing up, while the first grid is. When I run the second grid first, it does not show up; when I run it after the first grid, I get a result, but with the first grid's columns rather than the second grid's. Does anybody have a suggestion on this? Thanks.

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired: data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SQL Server CE, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
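    A hedged sketch of the read-through pattern described (all types and names here are illustrative, not the poster's API):

        using System;
        using System.Collections.Generic;

        // Sketch: serve cached logs, fetch only the minutes never seen before.
        class LogCache
        {
            private readonly SortedDictionary<DateTime, double> store =
                new SortedDictionary<DateTime, double>();
            private readonly Func<DateTime, double> readFromDevice; // slow UART path

            public LogCache(Func<DateTime, double> device)
            {
                readFromDevice = device;
            }

            public double Get(DateTime minute)
            {
                double value;
                if (!store.TryGetValue(minute, out value)) // cache first
                {
                    value = readFromDevice(minute);        // fetch the gap
                    store[minute] = value;                 // persist for next time
                }
                return value;
            }
        }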

    Read the article

  • getting another website's data to manipulate its value in Perl

    - by aun ali
    Here is my script:

        use CGI qw(:standard);
        use CGI::Carp qw(fatalsToBrowser warningsToBrowser);

        our @a = param(src="http://fxrates.forexpros.com/index.php?pairs_ids=1525=last_update");
        print header(), start_html('results.pl');
        $a = http://fxrates.forexpros.com/index.php?pairs_ids=1525=last_update;
        '<iframe frameborder="0" scrolling="no" height="95" width="213" allowtransparency="true" marginwidth="0" marginheight="0" src="http://fxrates.forexpros.com/index.php?pairs_ids=1525;&header-text-color=%23FFFFFF&curr-name-color=%230059b0&inner-text-color=%23000000&green-text-color=%232A8215&green-background=%23B7F4C2&red-text-color=%23DC0001&red-background=%23FFE2E2&inner-border-color=%23CBCBCB&border-color=%23cbcbcb&bg1=%23F6F6F6&bg2=%23ffffff&bid=hide&ask=hide&last=hide&high=hide&low=hide&change=hide&change_in_percents=hide&last_update=show"></iframe><br /><div style="width:213"><span style="float:left"><span style="font-size: 11px;color: #333333;text-decoration: none;">The <a href="http://www.forexpros.com/quotes" target="_blank" style="font-size: 11px;color: #06529D; font-weight: bold;" class="underline_link">Forex Quotes</a> are Powered by Forexpros - The Leading Financial Portal.</span></span></div>';
        print '<br/>';
        print "$a";
        print end_html();

    I am getting this error:

        Can't modify constant item in scalar assignment at livedata.pl line 4, near ""http://fxrates.forexpros.com/index.php?pairs_ids=1525=last_update")"
        syntax error at livedata.pl line 6, near "http:"
        Execution of livedata.pl aborted due to compilation errors.
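    The error itself is a quoting problem: the URLs on the lines the compiler names are bare strings, so Perl tries to parse them as code; and param() reads incoming CGI parameters rather than fetching a remote page. A minimal hedged sketch of actually retrieving the page for manipulation (URL taken from the post; the parsing step is left out):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use LWP::Simple;

        # Sketch: fetch the remote rates page, then parse $html as needed.
        my $url  = 'http://fxrates.forexpros.com/index.php?pairs_ids=1525';
        my $html = get($url);
        die "Could not fetch $url\n" unless defined $html;
        print length($html), " bytes fetched\n";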

    Read the article

  • PHP script randomly hangs

    - by sergdev
    I installed PHP 5 (more precisely 5.3.1) as an Apache module. Since then, one of my applications randomly hangs on mysql_connect: sometimes it works, sometimes not, and sometimes reloading the page helps. How can this be fixed? I use Windows Vista, Apache/2.2.14 (Win32) with PHP/5.3.1 as a module, and MySQL 5.0.67-community-nt. After a minute I obtain the error message:

        Fatal error: Maximum execution time of 60 seconds exceeded in path\to\mysqlcon.php on line 9

    I run MySQL locally, and heavy load could not be a reason: SHOW PROCESSLIST shows about 3 processes, and SHOW VARIABLES LIKE 'MAX_CONNECTIONS' is 100.

    UPDATE: At first I thought that this was connected with mysql_connect, but now I can't say for certain. The odd thing is that when I insert these lines to debug:

        $fh = fopen("E://_debugLog", 'a');
        fwrite($fh, __FILE__ . " : " . __LINE__ . "\n");
        fclose($fh);

    the script starts working near that location, as a rule.
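    A hedged diagnostic sketch (the timeout value and host are assumptions): lowering the connect timeout turns the 60-second hang into a fast, loggable failure, which at least shows whether the time is being lost inside mysql_connect, and connecting to 127.0.0.1 rules out a flaky localhost lookup:

        <?php
        // Sketch: fail fast and log how long the connect actually took.
        ini_set('mysql.connect_timeout', 5); // seconds
        $start = microtime(true);
        $link  = @mysql_connect('127.0.0.1', 'user', 'password');
        error_log(sprintf('mysql_connect took %.2fs: %s',
            microtime(true) - $start,
            $link ? 'ok' : mysql_error()));
        ?>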

    Read the article

  • Spring Dependency Injecting an annotated Aspect

    Using Spring, I've had some issues doing dependency injection on an annotated aspect class. CacheService is injected upon the Spring context's startup, but when the weaving takes place, it says that cacheService is null, so I am forced to look up the Spring context manually and get the bean from there. Is there another way of going about it? Here is an example of my aspect:

        import org.apache.log4j.Logger;
        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;

        import com.mzgubin.application.cache.CacheService;

        @Aspect
        public class CachingAdvice {

            private static Logger log = Logger.getLogger(CachingAdvice.class);

            private CacheService cacheService;

            @Around("execution(public * com.mzgubin.application.callMethod(..)) && " +
                    "args(params)")
            public Object addCachingToCreateXMLFromSite(ProceedingJoinPoint pjp, InterestingParams params) throws Throwable {
                log.debug("Weaving a method call to see if we should return something from the cache or create it from scratch by letting control flow move on");
                Object result = null;
                if (getCacheService().objectExists(params)) {
                    result = getCacheService().getObject(params);
                } else {
                    result = pjp.proceed(pjp.getArgs());
                    getCacheService().storeObject(params, result);
                }
                return result;
            }

            public CacheService getCacheService() {
                return cacheService;
            }

            public void setCacheService(CacheService cacheService) {
                this.cacheService = cacheService;
            }
        }
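    A hedged note (standard Spring/AspectJ practice, untested against this project): if the aspect is woven by AspectJ rather than instantiated by Spring, the null field usually means AspectJ created its own instance. Spring can configure that instance through the aspect's aspectOf() factory method; bean names below are illustrative:

        <!-- Sketch: let Spring inject into the instance AspectJ created. -->
        <bean id="cachingAdvice"
              class="com.mzgubin.application.CachingAdvice"
              factory-method="aspectOf">
            <property name="cacheService" ref="cacheService"/>
        </bean>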

    Read the article

  • PHP Object Creation and Memory Usage

    - by JohnO
    A basic dummy class:

        class foo {
            var $bar = 0;
            function foo() {}
            function boo() {}
        }

        echo memory_get_usage();
        echo "\n";
        $foo = new foo();
        echo memory_get_usage();
        echo "\n";
        unset($foo);
        echo memory_get_usage();
        echo "\n";
        $foo = null;
        echo memory_get_usage();
        echo "\n";

    Outputs:

        $ php test.php
        353672
        353792
        353792
        353792

    Now, I know that the PHP docs say that memory won't be freed until it is needed (hitting the ceiling). However, I wrote this up as a small test because I've got a much longer task, using a much bigger object, with many instances of that object. And the memory just climbs, eventually running out and stopping execution. Even though these large objects do take up memory, since I destroy them after I'm done with each one (serially), it should not run out of memory (unless a single object exhausts the entire memory space, which is not the case). Thoughts?
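    One hedged possibility (an assumption, since the real class is not shown): if the big objects reference each other in cycles, reference counting alone never frees them; PHP 5.3's cycle collector can be invoked explicitly. A minimal sketch:

        <?php
        // Sketch: $a and $b form a reference cycle that refcounting alone
        // cannot free; gc_collect_cycles() (PHP >= 5.3) reclaims them.
        gc_enable();
        for ($i = 1; $i <= 10000; $i++) {
            $a = new stdClass();
            $b = new stdClass();
            $a->peer = $b; // cycle: $a -> $b -> $a
            $b->peer = $a;
            unset($a, $b);
            if ($i % 1000 === 0) {
                gc_collect_cycles();
                echo memory_get_usage(), "\n";
            }
        }
        ?>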

    Read the article

  • GridView ObjectDataSource LINQ Paging and Sorting using a multiple-table query

    - by user367426
    I am trying to create a paging and sorting object data source that, before execution, returns all results, then sorts on these results before filtering and then using the Take and Skip methods, with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx

    Now, I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    Now, the problem I have is that many of the queries return joined tables, and I would like to sort on fields from the other tables. So when executing a query you can create a function that will assign the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is, is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using:

        (from a in dc.Accounts
         where a.CompanyID == companyID && a.Archived == null
         select new { PriceTypeName = a.DisplayPriceType.Name })

    but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.
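    A hedged sketch of one route (illustrative, not the article's code): build the key selector from a dotted path such as "DisplayPriceType.Name", so the sort is executed by the provider on the joined column and no copy property is needed. Usage might look like OrderByPath(accounts, "DisplayPriceType.Name").Skip(page * size).Take(size):

        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryExtensions
        {
            // Sketch: build o => o.DisplayPriceType.Name from a string path
            // so OrderBy can sort on a joined column in the database.
            public static IQueryable<T> OrderByPath<T>(this IQueryable<T> source, string path)
            {
                var param = Expression.Parameter(typeof(T), "o");
                Expression body = param;
                foreach (var member in path.Split('.'))
                    body = Expression.Property(body, member); // walk the dotted path
                var keySelector = Expression.Lambda(body, param);
                var call = Expression.Call(typeof(Queryable), "OrderBy",
                    new[] { typeof(T), body.Type },
                    source.Expression, Expression.Quote(keySelector));
                return source.Provider.CreateQuery<T>(call);
            }
        }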

    Read the article

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the most simple of mappings with FluentNHibernate and SQL Server 2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically, and adding the user ID and title supplied.

    Database table layout:

        CategoryID -- int -- not null, primary key, auto-incrementing
        UserID     -- uniqueidentifier -- not null
        Title      -- varchar(50) -- not null

    Simple. My SessionFactory code (which works, as far as I can tell):

        _SessionFactory = Fluently.Configure().Database(
            MsSqlConfiguration.MsSql2005
                .ConnectionString(c => c.FromConnectionStringWithKey("SVTest")))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>())
            .BuildSessionFactory();

    My ClassMap code:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Id(x => x.ID).Column("CategoryID").Unique();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }

    My Class code:

        public class Category
        {
            public virtual int ID { get; private set; }
            public virtual string Title { get; set; }
            public virtual Guid UserID { get; set; }

            public Category()
            {
                // do nothing
            }
        }

    And the page where I save the object:

        public void Add(Category catToAdd)
        {
            using (ISession session = SessionProvider.GetSession())
            {
                using (ITransaction Transaction = session.BeginTransaction())
                {
                    session.Save(catToAdd);
                    Transaction.Commit();
                }
            }
        }

    I receive the error: Invalid object name 'Category'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'.

    I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!
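    The guess at the end matches Fluent NHibernate's convention: with no explicit table name, the entity's class name ("Category") is used, hence the invalid object name. A hedged sketch of the mapping with the table set (the identity generator is an assumption based on the auto-incrementing key):

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Table("sv_Categories"); // map to the real table name
                Id(x => x.ID).Column("CategoryID").GeneratedBy.Identity();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }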

    Read the article

  • Which is the correct design pattern for my PHP application?

    - by user1487141
    I've been struggling to find a good way to implement my system, which essentially matches the season and episode number of a show from a string. You can see the current working code here: https://github.com/huddy/tvfilename

    I'm currently rewriting this library and want a nicer way to implement how the match happens. Currently it works like this: there's a folder of classes (called handlers); every handler is a class that implements an interface to ensure a match() method exists, and that method uses the regex stored in a property of the handler class (of which there are many) to try to match a season and episode. The class loads all of these handlers by instantiating each one into an array stored in a property, and when I want to try to match some strings, the method iterates over these objects calling match(); the first one that returns true is then returned in a result set with the season and episode it matched.

    I don't really like this way of doing it; it feels hacky to me, and I'm hoping a design pattern can help. My ultimate goal is to do this using best practices, and I wondered which pattern I should use. The other problems that exist are:

    More than one handler could match a string, so they have to be kept in an order that stops the greedier ones matching first. I'm not sure this is solvable, since some of the regex patterns have to be greedy, but possibly a score system could help: something that shows a percentage of how likely the match is to be correct. I'd have no idea how to actually implement this, though.

    I'm also not sure whether instantiating all those handlers is a good way of doing it. Speed is important, but using best practices and sticking to design patterns to create good, extensible and maintainable code is my ultimate priority.

    It's worth noting the handler classes sometimes do other things than just regex matching; they sometimes prep the string to be matched by removing common words, etc. Cheers for any help. Billy
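    For what it's worth, the structure described is essentially the Chain of Responsibility pattern. A hedged PHP sketch (illustrative names; the priority gives the greedier handlers a place at the back of the queue):

        <?php
        // Sketch: handlers are tried in priority order; first match wins.
        interface Handler
        {
            public function match($string); // array(season, episode) or null
            public function priority();     // lower values run first
        }

        class HandlerChain
        {
            private $handlers = array();

            public function register(Handler $handler)
            {
                $this->handlers[] = $handler;
                usort($this->handlers, function ($a, $b) {
                    return $a->priority() - $b->priority();
                });
            }

            public function match($string)
            {
                foreach ($this->handlers as $handler) {
                    $result = $handler->match($string);
                    if ($result !== null) {
                        return $result;
                    }
                }
                return null; // no handler recognised the string
            }
        }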

    Read the article

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each):

        clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumReMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumRe,  0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumImMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumIm,  0, NULL, NULL);

    When I analyze these with the NVIDIA Visual Profiler, I see that the actual memcpy operation only takes 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is much faster.

    Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (immediately before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way to combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?
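    On the last question, a hedged sketch (standard OpenCL 1.1 API, untested here; ctx, device, queue and host_buf are assumed to already exist): allocate one parent buffer and carve per-array views out of it with clCreateSubBuffer, so kernels keep distinct cl_mem arguments while a single clEnqueueReadBuffer drains everything. Sub-buffer origins must be aligned to CL_DEVICE_MEM_BASE_ADDR_ALIGN:

        /* Sketch: one contiguous parent buffer, three aligned sub-buffer views. */
        cl_uint align_bits;
        clGetDeviceInfo(device, CL_DEVICE_MEM_BASE_ADDR_ALIGN,
                        sizeof(align_bits), &align_bits, NULL);
        size_t align  = align_bits / 8;
        size_t region = 4313 * sizeof(cl_float);
        size_t stride = ((region + align - 1) / align) * align; /* round up */

        cl_int err;
        cl_mem parent = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 3 * stride, NULL, &err);

        cl_mem parts[3];
        for (int i = 0; i < 3; ++i) {
            cl_buffer_region r = { (size_t)i * stride, region };
            parts[i] = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                         CL_BUFFER_CREATE_TYPE_REGION, &r, &err);
        }

        /* One transfer replaces the three separate reads. */
        clEnqueueReadBuffer(queue, parent, CL_FALSE, 0, 3 * stride,
                            host_buf, 0, NULL, NULL);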

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene; other content, such as Word documents, would be indexed in Lucene but stored in blob storage). I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X".

    I think I understand how to add properties to each document to achieve these three objectives. (I am listing them here so that if anyone is kind enough to answer, they will have a better idea of what I am trying to do.) My questions revolve around performance and security:

    Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient?

    Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index.

    Any insight would be greatly appreciated. Regards, Nate
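    On the first question, a common hedged approach (editorial; sketched against the Lucene 3.x API): store the owner's ID as a non-analyzed field and have the server, never the client, add the owner clause to every query, so a shared index still cannot leak across users:

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class OwnerFilter {

            // Sketch: stamp each document with its owner at index time.
            public static Document stamp(Document doc, String userId) {
                doc.add(new Field("ownerId", userId,
                        Field.Store.NO, Field.Index.NOT_ANALYZED));
                return doc;
            }

            // Sketch: wrap the user's ad-hoc query in a mandatory owner clause.
            public static Query secure(Query userQuery, String userId) {
                BooleanQuery q = new BooleanQuery();
                q.add(userQuery, BooleanClause.Occur.MUST);
                q.add(new TermQuery(new Term("ownerId", userId)),
                      BooleanClause.Occur.MUST);
                return q;
            }
        }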

    Read the article
