Search Results

Search found 13227 results on 530 pages for 'memory efficiency'.


  • How to get address of va_arg?

    - by lionbest
    I'm hacking on an old C API and I got a compile error with the following code:

        void OP_Exec( OP* op, ... )
        {
            int i;
            va_list vl;
            va_start(vl, op);
            for( i = 0; i < op->param_count; ++i ) {
                switch( op->param_type[i] ) {
                case OP_PCHAR:
                    op->param_buffer[i] = va_arg(vl, char*); // ok, it works
                    break;
                case OP_INT:
                    op->param_buffer[i] = &va_arg(vl, int);  // error here
                    break;
                // ... more here
                }
            }
            op->pexec(op);
            va_end(vl);
        }

    The error with gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9) was: main.c|55|error: lvalue required as unary ‘&’ operand. So why exactly is it not possible to get a pointer to the argument here, and how can I fix it? This code is executed very often with different OP*, so I would prefer not to allocate extra memory. Is it possible to iterate over a va_list knowing only the sizes of the arguments?
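    A possible fix, sketched below rather than taken from the post: va_arg yields a value, not an lvalue, so its address cannot be taken directly. The value has to be copied into storage that outlives the loop before a pointer to it can be stored; here is a standalone C illustration (all names invented):

        #include <stdarg.h>
        #include <stdio.h>

        /* Sketch: copy each int argument into caller-owned storage, then keep a
           pointer to that storage -- &va_arg(vl, int) itself is not legal C because
           va_arg() yields a value, not an lvalue. */
        static void collect(int count, ...)
        {
            int values[8];          /* scratch storage the pointers can refer to */
            int *ptrs[8];
            va_list vl;
            int i;
            va_start(vl, count);
            for (i = 0; i < count && i < 8; ++i) {
                values[i] = va_arg(vl, int);   /* copy the value out of the va_list */
                ptrs[i] = &values[i];          /* now taking an address is fine */
            }
            va_end(vl);
            for (i = 0; i < count && i < 8; ++i)
                printf("%d\n", *ptrs[i]);
        }

        int main(void)
        {
            collect(3, 10, 20, 30);
            return 0;
        }

    In the original OP_Exec the equivalent would be a small per-parameter scratch array inside OP (a hypothetical field), so no heap allocation is needed.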

    Read the article

  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory information. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And then there is: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly every now and then. As far as I remember from Delphi and from the following site (answer by dan04): Are methods also serialized along with the data members in .NET?, sizeof(<OBJECT or CLASS>) gives the size of all data members together (no methods/procedures). A nice C example is given there, with data and methods declared in one class/struct, but at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations, like Pascal's VMT, may be different in memory.
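    A small C++ check of the sizeof point made above (a sketch, not from the referenced post): adding non-virtual member functions does not change an object's size, which is one reason serializers only persist data members.

        #include <cstdio>

        struct PlainData {
            int a;
            double b;
        };

        struct WithMethods {
            int a;
            double b;
            double Sum() const { return a + b; }   // the code lives once, outside every instance
        };

        int main() {
            // Typically prints the same value twice: member functions occupy no per-object space.
            std::printf("%zu %zu\n", sizeof(PlainData), sizeof(WithMethods));
            return 0;
        }

    A virtual function, by contrast, typically adds a vtable pointer to each instance, which matches the VMT caveat at the end of the question.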

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, and some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended for significant volumes of data (tens of mega-samples/sec), so performance is a must.
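    A minimal sketch of the in-process, multi-reader ring buffer idea (C++11, all names invented here): one writer, N readers that each see every sample, each reader with its own cursor. A mutex and condition variables keep it simple; a lock-free version would replace them with atomic cursors, but the data layout is the same.

        #include <condition_variable>
        #include <cstddef>
        #include <mutex>
        #include <vector>

        class BroadcastRing {
        public:
            BroadcastRing(std::size_t capacity, std::size_t readers)
                : buf_(capacity), readPos_(readers, 0), writePos_(0) {}

            void Write(float sample) {
                std::unique_lock<std::mutex> lock(mu_);
                // Block until the slowest reader is less than a full buffer behind.
                notFull_.wait(lock, [this] { return writePos_ - Slowest() < buf_.size(); });
                buf_[writePos_ % buf_.size()] = sample;
                ++writePos_;
                notEmpty_.notify_all();
            }

            float Read(std::size_t reader) {
                std::unique_lock<std::mutex> lock(mu_);
                notEmpty_.wait(lock, [this, reader] { return readPos_[reader] < writePos_; });
                float sample = buf_[readPos_[reader] % buf_.size()];
                ++readPos_[reader];
                notFull_.notify_one();
                return sample;
            }

        private:
            std::size_t Slowest() const {            // cursor of the laggiest reader
                std::size_t s = readPos_[0];
                for (std::size_t p : readPos_) if (p < s) s = p;
                return s;
            }
            std::vector<float> buf_;
            std::vector<std::size_t> readPos_;
            std::size_t writePos_;
            std::mutex mu_;
            std::condition_variable notFull_, notEmpty_;
        };

    For tens of mega-samples per second the Write/Read calls would batch blocks of samples rather than single floats, but the per-reader cursor idea carries over unchanged.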

    Read the article

  • OpenCL Matrix Multiplication - Getting wrong answer

    - by Yash
    Here's a simple OpenCL matrix multiplication kernel which is driving me crazy:

        __kernel void matrixMul( __global int* C, __global int* A, __global int* B, int wA, int wB){
            int row = get_global_id(1); //2D Thread ID x
            int col = get_global_id(0); //2D Thread ID y
            //Perform dot-product accumulated into value
            int value;
            for ( int k = 0; k < wA; k++ ){
                value += A[row*wA + k] * B[k*wB+col];
            }
            C[row*wA+col] = value; //Write to the device memory
        }

    where the inputs are A = [72 45 75 61] and B = [26 53 46 76]. The output I am getting is C = [3942 7236 3312 5472], but the output should be C = [3943 7236 4756 8611]. The problem I am facing is that for an array of any dimensions, the elements of the first row of the resulting matrix are correct, while the elements of all the other rows are wrong. By the way, I am using pyopencl. I don't know what mistake I am making here. I have spent the entire day on this with no luck. Please help me with this.
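    One likely culprit, sketched here rather than taken from the original post: the accumulator value is never initialized, so each work-item starts from garbage, and the output index uses wA where the output row stride is wB (harmless for square matrices, wrong otherwise). A corrected version of the same kernel might look like this:

        __kernel void matrixMul(__global int* C, __global int* A, __global int* B, int wA, int wB)
        {
            int row = get_global_id(1);
            int col = get_global_id(0);

            int value = 0;                           // start the dot product at zero
            for (int k = 0; k < wA; k++)
                value += A[row * wA + k] * B[k * wB + col];

            C[row * wB + col] = value;               // output row stride is wB, not wA
        }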

    Read the article

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses the operator < between the elements to compare. Forgetting about the list for a moment, my actual question is: how do these (>, <, >=, <=) operators work on pointers in C++ and C? Do they simply compare the actual memory addresses?

        char* p1 = (char*) 0xDAB0BC47;
        char* p2 = (char*) 0xBABEC475;

    e.g. on a 32-bit, little-endian system, is p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...
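    For the original sorting problem, a small sketch (not from the post): compare the pointed-to strings rather than the pointer values by giving sort() a comparator.

        #include <cstdio>
        #include <cstring>
        #include <list>

        int main() {
            std::list<const char*> words = {"pear", "apple", "orange"};

            // Compare the characters the pointers refer to, not the addresses themselves.
            words.sort([](const char* a, const char* b) { return std::strcmp(a, b) < 0; });

            for (const char* w : words)
                std::printf("%s\n", w);   // apple, orange, pear
            return 0;
        }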

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It all works fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualized, so there will be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox and I execute the filtering + binding on every key up, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimized with some kind of index that could speed up searching. Any suggestions or code samples are much welcome. Thanks.
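    One direction for the out-of-sync filtering (a sketch, not from the post; it assumes .NET 4.5 async/await and reuses names from the question such as listCollection, currentWord, and a hypothetical suggestionListBox): run the filter off the UI thread and stamp each keystroke with a version number, so only the newest result is ever bound.

        // Inside the window's code-behind; requires using System, System.Linq,
        // System.Threading.Tasks and System.Windows.Input.
        private int _filterVersion;                    // hypothetical field

        private async void Editor_KeyUp(object sender, KeyEventArgs e)
        {
            int version = ++_filterVersion;            // stamp this keystroke
            string word = currentWord;

            var filtered = await Task.Run(() =>
                listCollection
                    .Where(s => s.FilterValue.IndexOf(word, StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList());

            if (version == _filterVersion)             // ignore results superseded by a newer keystroke
                suggestionListBox.ItemsSource = filtered;
        }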

    Read the article

  • Can't get screen pixel from a specific full screen game with any language?

    - by user1007059
    Okay, I know this might seem like I'm posting a duplicate question, since I asked something similar a day ago. However, if anyone sees a problem with this, please read my question first before judging. Yesterday I tried getting a specific pixel from a fullscreen game in C#. I thought my C# code was faulty, but when I tried with multiple fullscreen games today, they all worked except for that specific game. I literally tried 10 different fullscreen games: a couple of MMOFPS, MMORPG, and MMOTPS titles, regular RPGs, regular shooters, regular action-adventure games, etc. I tried multiple programming languages, and with every game except the one I'm dealing with, the pixel color is returned as I wanted. So let me explain what I tried: first, in C#, I got an IntPtr with GetDC(IntPtr.Zero), then invoked GetPixel(int x, int y) and extracted the color from it. Then I tried the Robot class in Java with its getPixelColor(int x, int y) method. I also tried GetDC(0) to get an HDC in C++, then invoked GetPixel(int x, int y) and again extracted the color. These three methods worked exactly the same in every single game except that specific game: they returned the pixel perfectly and extracted exactly the same color. I don't feel it's necessary to tell you the game's name, since you probably don't even know it, but what could possibly be causing this malfunction in one specific game? PS: The game always returns an RGB color of A = 255, R = 0, G = 0, B = 0. Also, I tried taking a snapshot of the game in the three languages and then reading the pixel from the snapshot, which actually works in all three, but since I need this pixel every 30 ms it makes my game lag a bit (and I think it takes up a lot of memory).

    Read the article

  • Char C question about encoding signed/unsigned.

    - by drigoSkalWalker
    Hi guys. I read that C does not define whether char is signed or unsigned, and the GCC page says it can be signed on x86 and unsigned on PowerPC and ARM. OK, I'm writing a program with GLib, which defines gchar (nothing more than an alias for char, just a way of standardizing). My question is: what about UTF-8? Does it use more than one block (byte) of memory? Say I have a variable unsigned char *string = "My string with UTF8 encoding ~ çã"; If I declare my variable as unsigned, will I have only 127 values (so my program would have to store more blocks of memory), or do the UTF-8 bytes become negative too? Sorry if I can't explain it correctly; I think it is a bit complex. NOTE: Thanks for all the answers. I still don't understand how it is interpreted normally. I think that, like ASCII, if I have a signed and an unsigned char in my program, the strings have different values, and that leads to confusion; imagine it with UTF-8 then.
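    A small C check of what actually ends up in memory (a sketch; it assumes the source file is saved as UTF-8): the bytes of a UTF-8 string are the same whether char is signed or unsigned. Bytes above 0x7F simply appear as negative values when viewed through a signed char, which is why they are usually inspected through an unsigned char cast.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *s = "~ çã";   /* UTF-8 bytes: 0x7E 0x20 0xC3 0xA7 0xC3 0xA3 */
            size_t i;
            for (i = 0; i < strlen(s); i++)
                printf("%02X ", (unsigned char)s[i]);   /* same bytes either way */
            printf("\n");
            return 0;
        }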

    Read the article

  • What's the best way to store Logon User information for Web Application?

    - by Morgan Cheng
    I was once on a project for a web application developed in ASP.NET. For each logged-on user, an object (let's call it UserSessionObject here) is created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So this UserSessionObject is pretty important. This design brings several problems, found later: 1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer for sticky connections; that is, HTTP requests within a single session are always sent to the same web server behind it. This limits scalability and maintainability. 2) This UserSessionObject is accessed in every HTTP request. To keep it consistent, there is an exclusive lock on the UserSessionObject, and only one HTTP request can be processed at any given time because it must obtain the lock first. Performance and response time suffer. Now I'm wondering whether there is a better design for handling this logged-on user case. It seems a shared-nothing architecture would help; that would mean the logon user info is retrieved from the database each time, and I'm afraid that would hurt performance. Is there any design pattern for logon-user web apps? Thanks.
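    One common direction (a sketch, not from the post): keep the per-user state in an out-of-process store so any web server can handle any request. In ASP.NET this can be as simple as switching the built-in session state to SQL Server, assuming the per-user object can be made serializable; the connection string below is a placeholder.

        <!-- web.config sketch: session data lives in SQL Server instead of worker-process RAM,
             so sticky load balancing is no longer required. -->
        <configuration>
          <system.web>
            <sessionState mode="SQLServer"
                          sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI"
                          timeout="20" />
          </system.web>
        </configuration>

    The exclusive-lock problem then becomes a matter of keeping the stored object small and loading it per request rather than sharing one mutable instance across requests.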

    Read the article

  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads.

    - by deroby
    Hi all, being new to C# I've run into this 'conundrum' when passing a SqlDataReader around between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper task run through it record by record, doing some stuff based upon the contents. There is no feedback to the recordset; it simply wades through until no records are left. This works fine, but given the nature of the job at hand, it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance). The question then becomes: when I pass this recordset as a SqlDataReader, do I have to use ref or not? It kind of boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk having the record position moved forward while not all fields have been fully read yet? The latter seems more like a 'data racing' issue and is probably covered by the lock()ing mechanism (or not?). My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref prevents me from applying a using() construction, which isn't very nice either. I thus created a "basic" project that tackles the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not? You can find the test project here, as it seems I can't upload it to this question directly?!? ps: it's just a test project and I'm new to C#, so please be gentle on me when breaking down the code =P
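    A small C# illustration of the ref question (a sketch, separate from the SqlDataReader case): a reference-type argument is not copied when passed without ref; the parameter is just another reference to the same object, so ref is only needed if the callee must reassign the caller's variable to a different object.

        using System;
        using System.Text;

        class Demo
        {
            static void Append(StringBuilder sb)   // no 'ref': sb still refers to the caller's object
            {
                sb.Append("!");
            }

            static void Main()
            {
                var sb = new StringBuilder("hi");
                Append(sb);
                Console.WriteLine(sb);             // prints "hi!" -- the same object was modified
            }
        }

    Whether several threads may safely advance a single SqlDataReader concurrently is a separate thread-safety question from ref; the copying concern, at least, does not apply.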

    Read the article

  • Usage of autorelease pools for fetch method

    - by Matthias
    Hi, I'm a little bit confused about autorelease pools when programming for the iPhone. I've read a lot, and the opinions seem to range from "do NOT use" to "no problem to use". My specific problem is that I would like to have a class which encapsulates the SQLite3 access, so I have, for example, the following method: -(User*)fetchUserWithId:(NSInteger)userId. Within this method an SQL query is run and a new User object is created with the data from the database and then returned. Within this DB access class I don't need this object anymore, so I could do a release, but since the calling method needs it, I would do an autorelease, wouldn't I? So, is it okay to use autorelease here, or would it use too much memory if this method is called quite frequently? Some websites say that the autorelease pool is not released until the end of the application; some say at every event (e.g. the user touches something). If I should not use autorelease, how can I make sure that the object is released correctly? Can I do a release in the fetch method and hope that the object is still there until the calling method can do a retain? Thanks for your help! Regards Matthias
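    A conventional sketch under manual reference counting (the User class and the SQLite details are assumed): a method whose name does not begin with alloc/new/copy returns an object the caller does not own, so returning it autoreleased is the usual Cocoa pattern. The caller retains it only if it needs to keep it beyond the current run-loop pass; the main autorelease pool is drained once per run-loop iteration, not at application exit.

        - (User *)fetchUserWithId:(NSInteger)userId {
            User *user = [[User alloc] init];
            // ... run the SQLite query and populate user's fields here ...
            return [user autorelease];   // caller retains (or assigns to a retain property) if it keeps it
        }

        // Caller side:
        User *user = [[dbAccess fetchUserWithId:42] retain];   // retain only if stored beyond this scope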

    Read the article

  • C++ design related question

    - by Kotti
    Hi! Here is the question's plot: suppose I have an abstract class for objects, let's call it Object. Its definition would include a 2D position and dimensions. Let it also have some virtual void Render(Backend& backend) const = 0 method used for rendering. Now I specialize my inheritance tree and add Rectangle and Ellipse classes. I guess they won't have their own properties, but they will have their own virtual void Render method. Let's say I implemented these methods, so that Render for Rectangle actually draws a rectangle, and the same for the ellipse. Now I add a class called Plane, which is defined as class Plane : public Rectangle and has a private member std::vector<Object*> plane_objects. Right after that I add a method to add some object to my plane. And here comes the question. If I design this method as void AddObject(Object& object), I face trouble: I won't be able to call virtual functions, because I would have to do something like plane_objects.push_back(new Object(object)), and this should be push_back(new Rectangle(object)) for rectangles and new Circle(...) for circles. If I implement this method as void AddObject(Object* object), it looks good, but then somewhere else this means making calls like plane.AddObject(new Rectangle(params)), and this is generally a mess because then it's not clear which part of my program should free the allocated memory ["when destroying the plane? why? are we sure that calls to AddObject were only done as AddObject(new something)?"]. I guess the problems caused by the second approach could be solved using smart pointers, but I am sure there has to be something better. Any ideas?
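    A sketch of the ownership question (C++11 smart pointers, simplified from the classes in the post): making AddObject take a std::unique_ptr states explicitly that the Plane owns its children and frees them automatically, so no raw new/delete bookkeeping is needed and nothing is sliced or copied.

        #include <memory>
        #include <utility>
        #include <vector>

        class Object {
        public:
            virtual ~Object() {}
            virtual void Render() const = 0;        // Backend parameter omitted in this sketch
        };

        class Rectangle : public Object {
        public:
            void Render() const override {}
        };

        class Plane : public Rectangle {
        public:
            void AddObject(std::unique_ptr<Object> object) {      // ownership transfer is explicit
                plane_objects.push_back(std::move(object));
            }
        private:
            std::vector<std::unique_ptr<Object>> plane_objects;   // destroyed with the Plane, no manual delete
        };

        int main() {
            Plane plane;
            plane.AddObject(std::unique_ptr<Object>(new Rectangle()));
            return 0;
        }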

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product). What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel"; it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks: I would much rather use my own code, iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • CUDA - multiple kernels to compute a single value

    - by Roger
    Hey, I'm trying to write a kernel to essentially do the following in C:

        float sum = 0.0;
        for(int i = 0; i < N; i++){
            sum += valueArray[i]*valueArray[i];
        }
        sum += sum / N;

    At the moment I have this inside my kernel, but it is not giving correct values:

        int i0 = blockIdx.x * blockDim.x + threadIdx.x;
        for(int i=i0; i<N; i += blockDim.x*gridDim.x){
            *d_sum += d_valueArray[i]*d_valueArray[i];
        }
        *d_sum= __fdividef(*d_sum, N);

    The code used to call the kernel is:

        kernelName<<<64,128>>>(N, d_valueArray, d_sum);
        cudaMemcpy(&sum, d_sum, sizeof(float) , cudaMemcpyDeviceToHost);

    I think that each thread is calculating a partial sum, but the final divide statement is not taking into account the accumulated value from each of the threads. Every thread is producing its own final value for d_sum? Does anyone know how I could go about doing this in an efficient way? Maybe using shared memory between threads? I'm very new to GPU programming. Cheers
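    A common shape for this (a sketch, not the poster's code; it assumes a block size of 128 and a device of compute capability 2.0+ for float atomicAdd): each thread accumulates a private partial sum, the block reduces those in shared memory, and one atomic add per block folds the block totals into d_sum, which must be zeroed before the launch. The final divide-by-N step is then done once on the host after the cudaMemcpy.

        __global__ void sumSquares(int N, const float *d_valueArray, float *d_sum)
        {
            __shared__ float partial[128];                 // assumes <<<blocks, 128>>> launches
            int tid = threadIdx.x;

            float local = 0.0f;                            // per-thread partial sum
            for (int i = blockIdx.x * blockDim.x + tid; i < N; i += blockDim.x * gridDim.x)
                local += d_valueArray[i] * d_valueArray[i];
            partial[tid] = local;
            __syncthreads();

            for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {  // tree reduction in shared memory
                if (tid < stride)
                    partial[tid] += partial[tid + stride];
                __syncthreads();
            }

            if (tid == 0)
                atomicAdd(d_sum, partial[0]);              // one atomic per block (float atomicAdd needs sm_20+)
        }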

    Read the article

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (1000s) of items. A pair of properties on each item are used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of existing items before iterating over the list of (possible) new items. My theory is that on the first iteration, all the items will be faulted in, and assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly: (1) use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (the current approach); (2) use a predicate to filter the set of existing items against the properties of the "new" items; (3) use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items). Is option 3 likely to be faster than my current approach? Do you know of a better way?
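    A sketch of option 3 in find-or-create form (the entity and attribute names are invented here, and the post's two-property key would need a compound predicate): fetch only the existing objects whose keys appear in the incoming batch, then insert whatever is left over, so each batch costs one fetch instead of N comparisons per item.

        NSError *error = nil;
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
        // incomingKeys holds the unique keys from the current RSS batch (single attribute assumed here)
        request.predicate = [NSPredicate predicateWithFormat:@"uniqueKey IN %@", incomingKeys];
        NSArray *existing = [context executeFetchRequest:request error:&error];
        NSSet *existingKeys = [NSSet setWithArray:[existing valueForKey:@"uniqueKey"]];
        // ... insert only the incoming items whose uniqueKey is not in existingKeys ...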

    Read the article

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, this takes time (the loading) and I'd like to keep the bitmap references around between lifecycle changes for quicker access. My question is: is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps need resources, 30-minute restart, etc). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need. Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?
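    One place to keep them (a sketch, assuming android.util.LruCache from API 12+ or the support-library equivalent; all names here are invented): a process-wide cache holds the Bitmap references across orientation changes and is simply repopulated from the SD card if the process itself has been killed.

        import android.graphics.Bitmap;
        import android.util.LruCache;

        public final class BitmapCache {
            // Roughly 8 MB of bitmaps; least-recently-used entries are evicted when full.
            private static final LruCache<String, Bitmap> CACHE =
                    new LruCache<String, Bitmap>(8 * 1024 * 1024) {
                        @Override
                        protected int sizeOf(String key, Bitmap value) {
                            return value.getRowBytes() * value.getHeight();  // size in bytes
                        }
                    };

            private BitmapCache() {}

            public static void put(String key, Bitmap bitmap) { CACHE.put(key, bitmap); }

            public static Bitmap get(String key) { return CACHE.get(key); }  // null if evicted or never cached
        }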

    Read the article

  • How to append new elements to Xml from stream

    - by Wololo
    I have a method which returns some XML in a memory stream:

        private MemoryStream GetXml()
        {
            XmlWriterSettings settings = new XmlWriterSettings();
            settings.Indent = true;
            using (MemoryStream memoryStream = new MemoryStream())
            {
                using (XmlWriter writer = XmlWriter.Create(memoryStream, settings))
                {
                    writer.WriteStartDocument();
                    writer.WriteStartElement("root");
                    writer.WriteStartElement("element");
                    writer.WriteString("content");
                    writer.WriteEndElement();
                    writer.WriteEndElement();
                    writer.WriteEndDocument();
                    writer.Flush();
                }
                return memoryStream;
            }
        }

    In this example the format of the XML will be:

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <element>content</element>
        </root>

    How can I insert a new element under the root, e.g.

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <element>content</element>
          ----->New element here <------
        </root>
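    One way to add to the document afterwards (a sketch, assuming .NET 4's System.Xml.Linq; the element name is a placeholder): rewind the stream, load it into an XDocument, add the element, and save it back out.

        using System.IO;
        using System.Xml.Linq;

        // memoryStream comes from GetXml(); rewind it before reading.
        memoryStream.Position = 0;
        XDocument doc = XDocument.Load(memoryStream);
        doc.Root.Add(new XElement("newElement", "new content"));

        using (MemoryStream updated = new MemoryStream())
        {
            doc.Save(updated);   // 'updated' now holds the XML with the extra child under <root>
            // ... return or write out 'updated' ...
        }

    Note also that returning memoryStream from inside its own using block hands the caller a disposed stream, so the original method would need to drop that using (or copy the bytes out) before the stream can be reused at all.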

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it produces a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running whenever the lag happens. The X period could be between 30 minutes and 2 hours; so, for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. Relevant my.cnf settings:

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written-to database uses InnoDB; the other ones are mostly read. Several of the InnoDB tables have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4 GB RAM, running 32-bit Ubuntu.
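    One way to narrow down a periodic stall like this (a sketch of standard diagnostics, not a known fix): capture what the server is doing at the moment the lag hits, since InnoDB log flushing/checkpointing and query-cache pruning are both classic causes of recurring pauses.

        -- Run these while the stall is happening:
        SHOW FULL PROCESSLIST;                 -- what every connection is waiting on
        SHOW ENGINE INNODB STATUS\G            -- pending I/O, log flushes, lock waits
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
        SHOW GLOBAL STATUS LIKE 'Qcache%';     -- heavy query-cache pruning can also cause stalls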

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function:

        var redis = require("redis"),
            client = redis.createClient();

        client.on("connect", function () {
            client.info(function (err, replay) {
                console.log(replay);
            })
        })

    but the response is unformatted: `#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it into an object, like:

        {
            redis_version: 2.6.16,
            redis_git_sha1: 00000000,
            redis_git_dirty: 0,
            ......
        }

    so that I can read each property's value and get the information I need?
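    The reply is just a \r\n-separated list of key:value lines with "# Section" headers, so a small helper (a sketch, no extra library assumed) can turn it into an object:

        function parseInfo(reply) {
            var info = {};
            String(reply).split('\r\n').forEach(function (line) {
                if (!line || line.charAt(0) === '#') return;   // skip blanks and "# Section" headers
                var idx = line.indexOf(':');
                if (idx > 0) info[line.slice(0, idx)] = line.slice(idx + 1);
            });
            return info;
        }

        // client.info(function (err, reply) { console.log(parseInfo(reply).redis_version); });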

    Read the article

  • extra new lines with several outputStream.write

    - by Sam
    Hi all, I am writing a JSP to export data in Excel format to the user. An Excel file can be received on the client side. However, since there is a large amount of data, I don't want to keep it all in server memory and write it at the end, so I try to divide it and write several times. However, each extra write(..) causes extra new lines at the top of the Excel worksheet, and the extra data is placed after these new lines. Does anyone know the reason? The code is something like this:

        response.setHeader("Content-disposition","attachment;filename=DocuShareSearch.xls");
        response.setHeader("Content-Type", "application/octet-stream");
        responseContent = "<table><tr><td>12131</td></tr>.......";
        byte[] responseByte1 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte1, 0, responseByte1.length );
        responseContent = ".....<tr><td>12131</td></tr></table>";
        byte[] responseByte2 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte2, 0, responseByte2.length );
        outputStream.close();
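    A plausible culprit, sketched here as a guess rather than a confirmed diagnosis: in Java, getBytes("utf-16") prepends a byte-order mark on every call, so each extra write() drops a stray BOM into the middle of the data, which Excel can render as blank rows. Encoding each chunk as UTF-16LE (or writing everything through a single Writer) avoids the repeated marker.

        response.setHeader("Content-disposition", "attachment;filename=DocuShareSearch.xls");
        response.setHeader("Content-Type", "application/octet-stream");

        // "UTF-16LE" emits no byte-order mark, so chunked writes stay contiguous.
        byte[] part1 = "<table><tr><td>12131</td></tr>".getBytes("UTF-16LE");
        outputStream.write(part1, 0, part1.length);

        byte[] part2 = "<tr><td>12131</td></tr></table>".getBytes("UTF-16LE");
        outputStream.write(part2, 0, part2.length);

        outputStream.close();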

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
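    A minimal C++ sketch of approach 3 (the parser pulls tokens on demand), invented here to show the shape rather than taken from any real compiler: the lexer exposes a next() call, the recursive-descent parser keeps one token of lookahead, and no token vector is ever materialized.

        #include <cctype>
        #include <cstdio>
        #include <string>

        struct Token { enum Kind { Number, Plus, End } kind; int value; };

        class Lexer {
        public:
            explicit Lexer(std::string src) : src_(src), pos_(0) {}
            Token next() {                                    // produce one token on demand
                while (pos_ < src_.size() && std::isspace((unsigned char)src_[pos_])) ++pos_;
                if (pos_ >= src_.size()) return Token{Token::End, 0};
                if (src_[pos_] == '+') { ++pos_; return Token{Token::Plus, 0}; }
                int value = 0;
                while (pos_ < src_.size() && std::isdigit((unsigned char)src_[pos_]))
                    value = value * 10 + (src_[pos_++] - '0');
                return Token{Token::Number, value};
            }
        private:
            std::string src_;
            std::size_t pos_;
        };

        class Parser {
        public:
            explicit Parser(Lexer& lexer) : lexer_(lexer), current_(lexer.next()) {}
            int ParseSum() {                                  // sum := number ('+' number)*  (no error handling)
                int total = current_.value;
                current_ = lexer_.next();
                while (current_.kind == Token::Plus) {
                    current_ = lexer_.next();                 // consume '+', now at the next number
                    total += current_.value;
                    current_ = lexer_.next();
                }
                return total;
            }
        private:
            Lexer& lexer_;
            Token current_;
        };

        int main() {
            Lexer lexer("1 + 2 + 39");
            Parser parser(lexer);
            std::printf("%d\n", parser.ParseSum());           // prints 42
            return 0;
        }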

    Read the article

  • How to use predicates in LINQ to Entities for Entity Framework objects

    - by user274947
    I'm using LINQ to Entities for Entity Framework objects in my Data Access Layer. My goal is to filter as much as I can in the database, without applying filtering logic to in-memory results. For that purpose the Business Logic Layer passes a predicate to the Data Access Layer, by which I mean a Func<MyEntity, bool>. So, if I use this predicate directly, like

        public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
        {
            return qry = _Context.MyEntities.Where(x => isMatched(x));
        }

    I get the exception [System.NotSupportedException]: {"The LINQ expression node type 'Invoke' is not supported in LINQ to Entities."} The solution in that question suggests using the AsExpandable() method from the LINQKit library. But again, using

        public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
        {
            return qry = _Context.MyEntities.AsExpandable().Where(x => isMatched(x));
        }

    I get the exception: Unable to cast object of type 'System.Linq.Expressions.FieldExpression' to type 'System.Linq.Expressions.LambdaExpression'. Is there a way to use a predicate in a LINQ to Entities query for Entity Framework objects so that it is correctly transformed into a SQL statement? Thank you.
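    The usual way to keep the filtering translatable to SQL (a sketch of the standard approach, not LINQKit-specific): have the Business Logic Layer hand over an expression tree rather than a compiled delegate, since IQueryable.Where(Expression<Func<T, bool>>) is what LINQ to Entities can turn into a WHERE clause.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public IQueryable<MyEntity> GetAllMatchedEntities(Expression<Func<MyEntity, bool>> isMatched)
        {
            // No Invoke node is created here: the expression tree itself is passed to Where,
            // so the provider can translate it to SQL instead of throwing NotSupportedException.
            return _Context.MyEntities.Where(isMatched);
        }

        // Caller in the Business Logic Layer (illustrative):
        // var active = dal.GetAllMatchedEntities(e => e.IsActive && e.Name.Contains("foo"));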

    Read the article

  • How many users are sufficient to make a heavy load for web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance drops drastically, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes in the database. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. The administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling analysis or code refactoring, so I'm considering two options: (1) my tables are MyISAM; I heard they use table-level locking, which is very slow -- is that right, and could I change them to InnoDB without worry? (2) What if I move MySQL to a dedicated machine with a lot of RAM?
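    On the MyISAM question, a sketch of how to see what is involved before deciding (the schema and table names below are placeholders): list the tables still on MyISAM, then convert them one at a time after a backup. Writes to MyISAM take table-level locks, while InnoDB locks rows, which is usually what matters under heavy concurrent writes.

        -- Which tables still use MyISAM (table-level locking under concurrent writes)?
        SELECT table_name, engine, table_rows
        FROM information_schema.tables
        WHERE table_schema = 'drupal_site' AND engine = 'MyISAM';

        -- Convert one table at a time (back up first; the table is locked while it rebuilds):
        ALTER TABLE example_table ENGINE=InnoDB;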

    Read the article

  • returning reference to a vector from a method and using its public members

    - by memC
    Dear experts, I have a vector t_vec that stores pointers to instances of class Too. The code is shown below. In main, I have a vector reference t_vec_2 which refers to the same memory address as B::t_vec. But when I try to access t_vec_2[0].val1, I get the error that val1 is not declared. Could you please point out what is wrong? Also, if you know of a better way to return a vector from a method, please let me know! Thanks in advance.

        class Too {
        public:
            Too();
            ~Too(){};
            int val1;
        };

        Too::Too(){
            val1 = 10;
        };

        class B {
        public:
            vector<Too*> t_vec;
            Too* t1;
            vector<Too*>& get_tvec();
            B(){t1 = new Too();};
            ~B(){delete t1;};
        };

        vector<Too*>& B::get_tvec(){
            t_vec.push_back(t1);
            return t_vec;
        }

        int main(){
            B b;
            b = B();
            vector<Too*>& t_vec_2 = b.get_tvec();
            // Getting error
            std::cout << "\n val1 = " << t_vec_2[0].val1;
            return 0;
        }
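    One likely cause, sketched here rather than confirmed: the vector's element type is Too*, so the member access in main needs -> instead of . (the reference to the vector itself is fine):

        vector<Too*>& t_vec_2 = b.get_tvec();
        std::cout << "\n val1 = " << t_vec_2[0]->val1;   // element is a Too*, so use -> (or (*t_vec_2[0]).val1)

    Separately, b = B(); copies the raw t1 pointer out of a temporary whose destructor then deletes it, so B would also need proper copy semantics (or that assignment simply dropped) to avoid a dangling pointer and a double delete.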

    Read the article

  • PendingIntent in Widget + TaskKiller

    - by YaW
    Hi, I've developed an application (called Instant Buttons), and the app has a widget feature. This widget uses a PendingIntent for the onClick of the widget. My PendingIntent code is something like this:

        Intent active = new Intent(context, InstantWidget.class);
        active.setAction(String.valueOf(appWidgetId));
        active.putExtra("blabla", blabla); //Some data
        PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0);
        actionPendingIntent.cancel();
        actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0);
        remoteViews.setOnClickPendingIntent(R.id.button, actionPendingIntent);

    The onReceive gets the intent and does some stuff with the MediaPlayer class to play a sound. I have reports from some users that the widgets stop working after a while, and with some research I've discovered it's because of task killers. It seems that when you kill the app in the task killer, the PendingIntent is erased from memory, so when you click the widget, it doesn't know what to do. Is there any solution for this? Is my code wrong or something, or is this the default behavior of PendingIntent? Is there something I can use to keep task killers from stopping my widgets? Greetings.

    Read the article
