Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.


  • Is a MySQL index useful on column 'state' when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations, each executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        SELECT * FROM entities WHERE state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run (simplified). Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slowly. What I'd like to know: Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever. If it doesn't, are there any other things I could do to speed things up? Are there special 'mask indices' for fields with use cases like the above? TIA, Geert-Jan
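
    A plain B-tree index on 'flowstate' is unlikely to help here, because the WHERE clause wraps the column in an expression (`state >> 7 & 1`), which prevents MySQL from using the index. A minimal sketch of one workaround, assuming MySQL 5.7+ (generated columns) and hypothetical names:

        -- Expose the bit as an indexed generated column (hypothetical names).
        ALTER TABLE entities
          ADD COLUMN op7_done TINYINT(1) AS ((state >> 7) & 1) STORED,
          ADD INDEX idx_op7 (op7_done);

        -- This query can now use idx_op7 instead of scanning every row:
        SELECT * FROM entities WHERE op7_done = 1;

    On older servers the same effect can be had with an ordinary TINYINT column maintained by the programs that set the flag.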


  • How best to implement "favourites" feature? (like favourite products on a data driven website)

    - by ClarkeyBoy
    Hi, I have written a dynamic, database-driven, object-oriented website with an administration frontend, etc. I would like to add a feature where customers can save items as "favourites", without having to create an account and log in, so they can come back to them later, but I don't know exactly how to go about doing this. I see three options:

    1. Log favourites against the IP address, and re-attach them to an account if the customer later creates one.
    2. Force customers to create an account to be able to use this functionality.
    3. Log favourites against the IP address, but give users the option to save their favourites under a name they specify.

    The problem with option 1 is that I don't know much about IP addresses - my Dad thinks they are unique, but I know people have had problems with systems like this. The problem with options 1 and 2 is that accounts have not been opened up to customers yet - only administrators can log in at the moment. It should be easy to alter this (no more than a morning or afternoon's work), but I would also have to implement user groups too. The problem with option 3 is that if user A saves a favourites list called "My Favourites", and user B then tries to save a list under that name and is refused, user B will know the list saved by user A exists and be able to access it. A solution to this is to password-protect lists, but if I go to all that effort I may as well implement option 2. Of course I could always use option 4: an alternative, if anyone can suggest a better solution than any of the above. So has anyone ever done something like this before? If so, how did you go about it? What do you recommend (or not recommend)? Many thanks in advance, Regards, Richard
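
    One further option along the lines of option 3, sketched below in T-SQL-flavoured SQL with hypothetical names: issue each visitor an anonymous random token (a GUID) in a long-lived cookie and key favourites on that. This avoids shared-IP collisions, needs no registration, and lists cannot be guessed by name.

        -- Hypothetical schema: the token is a random GUID stored in a cookie.
        CREATE TABLE Favourites (
            FavouriteId  INT IDENTITY PRIMARY KEY,
            VisitorToken CHAR(36) NOT NULL,   -- GUID from the visitor's cookie
            ProductId    INT NOT NULL,
            CreatedAt    DATETIME NOT NULL DEFAULT GETDATE()
        );
        CREATE INDEX IX_Favourites_VisitorToken ON Favourites (VisitorToken);

    If the customer later registers, the rows for their token can simply be re-keyed to the new account id.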


  • Using pam_python in a script running with mod_python

    - by markys
    Hi! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use pam_python to query the PAM service. I adapted the example bundled with the module and got this:

        # out is the output stream used to print debug
        def auth(username, password, out):
            def pam_conv(aut, query_list, user_data):
                out.write("Query list: " + str(query_list) + "\n")
                # List to store the responses to the different queries
                resp = []
                for item in query_list:
                    query, qtype = item
                    # If PAM asks for an input, give the password
                    if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF:
                        resp.append((str(password), 0))
                    elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO:
                        resp.append(('', 0))
                out.write("Our response: " + str(resp) + "\n")
                return resp

            # If username or password is undefined, fail
            if username is None or password is None:
                return False

            service = 'login'
            pam_ = PAM.pam()
            pam_.start(service)
            # Set the username
            pam_.set_item(PAM.PAM_USER, str(username))
            # Set the conversation callback
            pam_.set_item(PAM.PAM_CONV, pam_conv)
            try:
                pam_.authenticate()
                pam_.acct_mgmt()
            except PAM.error, resp:
                out.write("Error: " + str(resp) + "\n")
                return False
            except:
                return False
            # If we get here, the authentication worked
            return True

    My problem is that this function does not behave the same whether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple cases:

        my_username = "markys"
        my_good_password = "lalala"
        my_bad_password = "lololo"

        def handler(req):
            req.content_type = "text/plain"
            req.write("1- " + str(auth(my_username, my_good_password, req)) + "\n")
            req.write("2- " + str(auth(my_username, my_bad_password, req)) + "\n")
            return apache.OK

        if __name__ == "__main__":
            print "1- " + str(auth(my_username, my_good_password, sys.__stdout__))
            print "2- " + str(auth(my_username, my_bad_password, sys.__stdout__))

    The result from the script is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        1- True
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    but the result from mod_python is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        Error: ('Authentication failure', 7)
        1- False
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong? Here is the original script, if that could help you. Thanks a lot!


  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++):

        extern "C" void foo();

        struct test {
            ~test() { }
        };

        void doTest() {
            test t; // 1
            foo();  // 2
        }

    This is being compiled for iOS devices in Xcode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error:

        FATAL: incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used)
        clang: error: assembler command failed with exit code 1 (use -v to see invocation)

    The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change the build setting, then I get multiple "ld warning: unexpected srelocation type 9" warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test. So: Is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.


  • Why is setting HTML5's CanvasPixelArray values ridiculously slow, and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example, if I have code like:

        var imageData = ctx.getImageData(0, 0, 500, 500);
        for (var i = 0; i < imageData.data.length; i += 4) {
            imageData.data[i]     = buffer[i];
            imageData.data[i + 1] = buffer[i];
            imageData.data[i + 2] = buffer[i];
        }
        ctx.putImageData(imageData, 0, 0);

    profiling with Chrome reveals that it runs 44% slower than the following code, where the CanvasPixelArray is not used:

        var tempArray = new Array(500 * 500 * 4);
        var imageData = ctx.getImageData(0, 0, 500, 500);
        for (var i = 0; i < imageData.data.length; i += 4) {
            tempArray[i]     = buffer[i];
            tempArray[i + 1] = buffer[i];
            tempArray[i + 2] = buffer[i];
        }
        ctx.putImageData(imageData, 0, 0);

    My guess is that the slowdown is due to the conversion between JavaScript doubles and the internal unsigned 8-bit integers used by the CanvasPixelArray. Is this guess correct? Is there any way to reduce the time spent setting values in the CanvasPixelArray?
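
    One commonly suggested mitigation (a sketch, not specific to any one browser, and it does not remove the typed-store conversion itself) is to hoist the property lookups out of the loop so each iteration touches only local variables:

        // Cache the pixel array and its length in locals; repeated property
        // lookups on imageData inside the loop add per-iteration overhead.
        var imageData = ctx.getImageData(0, 0, 500, 500);
        var data = imageData.data;
        for (var i = 0, n = data.length; i < n; i += 4) {
            var v = buffer[i];
            data[i]     = v;   // red
            data[i + 1] = v;   // green
            data[i + 2] = v;   // blue
            data[i + 3] = 255; // opaque alpha
        }
        ctx.putImageData(imageData, 0, 0);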


  • How to handle activity life cycle involving sockets in Android?

    - by Henrik
    Hello all, I have an Android activity which in turn starts a thread. In the thread I open a persistent TCP socket connection. When the socket connects to the server, dynamic data is downloaded, and the thread sends messages to the activity using the Handler class when data has been received.

    Now, if the user happens to switch from portrait to landscape mode, the activity gets an onDestroy call. At this moment I close the socket and stop the thread. When Android has switched landscape mode it calls onCreate yet again, and I have to do a socket reconnect. Also, all of the data the activity received needs to be downloaded once more, because the server has no way to know what has been sent before, i.e. there is no "resume" feature. Thus the problem is that a lot of data is resent every time landscape mode changes.

    What are my options here? Should I create a service which handles the socket traffic towards the server, so that the service always holds all the data the server has sent? Should I perhaps disable landscape mode altogether? Or would my best bet be to rewrite my server, which is a VERY BIG job :-) All input is welcome :-) / Henrik
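
    A service is the more robust route, but for the specific portrait/landscape case there is a lighter-weight sketch: declare in the manifest that the activity handles orientation changes itself, so it is not destroyed on rotation and the socket thread survives. (MyActivity is a placeholder name; the screenSize flag only exists from API 13 onward, so on older API levels orientation|keyboardHidden was the usual combination.)

        <!-- AndroidManifest.xml: keep the activity alive across rotation -->
        <activity android:name=".MyActivity"
                  android:configChanges="orientation|keyboardHidden" />

    The activity then receives onConfigurationChanged() instead of being recreated.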


  • How can I exclude pages created from a specific template from the CQ5 dispatcher cache?

    - by Shawn
    I have a specific Adobe CQ5 (5.5) content template that authors will use to create pages, and I want to exclude any page created from this template from the dispatcher cache. As I understand it currently, the only way I know to prevent caching is to configure dispatcher.any not to cache a particular URL. But in this case the URL isn't known until a web author uses the template to create a page, and the pages will be created in unpredictable directories, so I don't know the URL pattern ahead of time. I don't want to have to go back and modify dispatcher.any every time a page is created - or at least I want to automate this if there is no other way. I am using IIS for the dispatcher.

    The reason I don't want to cache the pages is that the underlying JSPs that render the content for these pages produce dynamic content, and the pages don't use query strings and won't carry authentication headers. How can I configure things so that any page created from a certain template is automatically excluded from the dispatcher cache? It seems like CQ ought to have some mechanism to respect HTTP response/caching headers: if the response headers specify that the response shouldn't be cached, the dispatcher shouldn't cache it - regardless of what dispatcher.any says. This is the CQ5 documentation I have been referencing.
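
    For what it's worth, the dispatcher does have one header-based escape hatch along these lines (hedged - verify against the dispatcher documentation for your version): a response carrying the header Dispatcher: no-cache is not cached, regardless of the URL rules in dispatcher.any. The template's rendering JSP could then set it unconditionally:

        <%-- Sketch: opt every page rendered by this template out of the cache. --%>
        <% response.setHeader("Dispatcher", "no-cache"); %>

    Since the header is set by the component, every page created from the template inherits the behaviour with no per-page configuration.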


  • How do I write this JPQL query? (Java)

    - by Nitesh Panchal
    Hello, say I have 5 tables:

        tblBlogs:           BlogId, BlogTitle
        tblBlogPosts:       BlogPostsId, BlogId, PostTitle
        tblBlogPostComment: BlogPostCommentId, CommentText, BlogPostsId, BlogMemberId
        tblUser:            UserId, FirstName
        tblBlogMember:      BlogMemberId, UserId, BlogId

    Now I want to retrieve only those blogs and posts on which the blog member has actually commented. In short, how do I write this plain old SQL:

        SELECT b.BlogTitle, bp.PostTitle, bpc.CommentText
        FROM tblBlogs b
        INNER JOIN tblBlogPosts bp ON b.BlogId = bp.BlogId
        INNER JOIN tblBlogPostComment bpc ON bp.BlogPostsId = bpc.BlogPostsId
        INNER JOIN tblBlogMember bm ON bpc.BlogMemberId = bm.BlogMemberId
        WHERE bm.UserId = 1;

    As you can see, everything is an inner join, so a row is only retrieved where the user has commented on some post of some blog. Suppose he has joined 3 blogs with ids 1, 2 and 3 (the blogs a user has joined are in tblBlogMember), but has only commented in blog 2 (on, say, BlogPostId = 1): that row will be retrieved, and blogs 1 and 3 won't, because it is an inner join.

    How do I write this kind of query in JPQL? In JPQL I only know how to write simple queries like:

        SELECT bm.blogId FROM BlogMember bm WHERE bm.user = :objUser

    where objUser is supplied using em.find(User.class, 1). Once I have all the blogs a user has joined, I could loop through them and do all the fancy things - but I don't want to get into this looping business and write all of that in my Java code. Instead, I want to leave it to the database engine. So how do I write the above plain SQL in JPQL? And what type of object will the JPQL query return, given that I am selecting only a few fields from several tables - what class should I cast the result to? I think I posted my requirement correctly; if I am not clear, please let me know. Thanks in advance :).
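
    A sketch of the equivalent JPQL, assuming entities Blog, BlogPost, BlogPostComment and BlogMember with relationship fields named as below (adjust to your actual mappings):

        SELECT b.blogTitle, bp.postTitle, bpc.commentText
        FROM BlogPostComment bpc
             JOIN bpc.blogPost bp
             JOIN bp.blog b
             JOIN bpc.blogMember bm
        WHERE bm.user = :user

    Because the select list mixes fields from several entities, the query returns a List<Object[]>, one Object[] per row. Alternatively, a constructor expression such as SELECT NEW com.example.CommentView(b.blogTitle, bp.postTitle, bpc.commentText) maps each row onto a small DTO class (CommentView is a hypothetical name).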


  • Getting Started with Ruby & Ruby on Rails

    - by JakeTheSnake
    Some background: I'm a jack of all trades, one of which is programming. I learned VB6 through Excel, and PHP for creating websites, and so far it's worked out just fine for me. I'm no CS major, nor even mathematically inclined - logic is what interests me.

    Current status: I'm willing to learn new and more powerful languages; my first foray into such a route is learning Ruby. I went to the main Ruby website and did the interactive intro (by the way, I'm currently getting redirected to google.com when I try the link... it's happening with other websites as well... is my computer infected?). I liked what I learned and wanted to get started using Ruby to create websites. I downloaded InstantRails and installed it; everything so far has been fine - the program starts up just fine, and I can test some Ruby code in the console. However, my troubles begin when I try to view a web page with Ruby code present.

    Lastly, my problem: In PHP, I can browse to the .php file directly and, with PHP tags and some simple 'echo' statements, be on my way to making dynamic web pages. But with the InstantRails app running, accessing a .rb or .rhtml page doesn't produce similar results. I made a simple text file named 'test.rb' and put basic HTML tags in there (html, head, body) and the Ruby tags <%= and %> with some Ruby code inside. The web page actually shows the tags and the code - as if it's all just plain HTML. I take it Ruby isn't parsing the page before it is displayed to the user, but this is where my lack of understanding of the Ruby environment stops me short. Where do I go from here?
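
    Unlike PHP, Rails never serves .rb or .rhtml files directly: a request is routed to a controller action, which renders a view template. A minimal sketch of the two files involved (illustrative names, Rails 2-era conventions to match InstantRails):

        # app/controllers/hello_controller.rb
        class HelloController < ApplicationController
          def index
            @greeting = "Hello from Ruby"
          end
        end

        <%# app/views/hello/index.html.erb (index.rhtml on older Rails) %>
        <html>
          <body>
            <p><%= @greeting %></p>
          </body>
        </html>

    With the default route in place, browsing to /hello then runs the action and renders the template, rather than echoing the file verbatim.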


  • Format for storing contacts in a database

    - by Gart
    I'm thinking about the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job Title, Address, etc. However, there are known industry standards for this kind of data, for example vCard, hCard, vCard-RDF/XML, or even the Windows Contacts XML Schema. Utilizing a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use?

    The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns, so I have been thinking of using some XML-based format to store the contacts. Then it would be possible to use XML indexes on this data if searching and ordering is needed. Is this a good approach? Which contact format and schema would you recommend for this?
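
    A sketch of the XML-column route, using SQL Server syntax as one example of an engine with native XML support (table and column names are hypothetical):

        CREATE TABLE Contacts (
            ContactId INT IDENTITY PRIMARY KEY,
            Card      XML NOT NULL   -- e.g. an hCard or vCard-RDF/XML payload
        );
        -- An XML index keeps the occasional search or ordering query viable:
        CREATE PRIMARY XML INDEX PXML_Contacts_Card ON Contacts (Card);

    At 100,000 records either design is small; the XML column mainly trades query convenience for standards-based portability.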


  • Translating Where() to SQL

    - by MBoros
    Hi. I saw DamienG's article (http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider) on how to map client-side properties to SQL. I ran through the article and saw great potential in it; mapping client properties to SQL is definitely an awesome idea. But I wanted to use it for something a bit more complicated than just concatenating strings. At the moment we are trying to introduce multilinguality into our business objects, and I hoped we could leave all the existing LINQ to SQL queries intact and just change the code of the multilingual properties, so that they would return the given property in the CurrentUICulture.

    The first idea was to change these fields to XML and then try Object.Property.Elements().Where(...), but it got stuck on Elements(), which couldn't be translated to SQL. I read somewhere that XML fields are actually treated as strings, and only on the app server do they become XElements, so the filtering would happen on the app server anyway, not in the DB. Fair point; it won't work like this. Let's try something else...

    So the second idea was to create a PolyGlots table (name taken from http://weblogic.sys-con.com/node/102698?page=0,1), a PolyGlotTranslations table and a Culture table, where the PolyGlots would be referenced from each internationalized property. This way I wanted to say, for example:

        private static readonly CompiledExpression<Announcement, string> nameExpression =
            DefaultTranslationOf<Announcement>
                .Property(e => e.Name)
                .Is(e => e.NamePolyGlot.PolyGlotTranslations
                          .Where(t => t.Culture.Code == Thread.CurrentThread.CurrentUICulture.Name)
                          .Single().Value);

    Now, unfortunately, here I get an error that the Where() function cannot be translated to SQL, which is a bit disappointing, as I was sure it would go through. I guess it is failing because the EntitySet is basically an IEnumerable, not an IQueryable - am I right? Is there another way to use the CompiledExpression class to achieve this goal? Any help appreciated.


  • Important question about LINQ to SQL performance on high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I had gotten really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC, I quickly moved all my projects to these technologies. I expected LINQ to SQL to work more slowly, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. The updates are primarily increments and decrements of values. I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    which worked with no problems, obviously. But with LINQ to SQL, the data is read first, moved into the class, changed and then saved:

        stats.RegisteredUsers++;
        db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased in the meantime (that happens on my site all the time), then LINQ goes "oops, this value is already 100,001 - whatever, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time - the stats value was increased about twice a second on average - and I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
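
    For what it's worth, LINQ to SQL can still issue the original atomic UPDATE through DataContext.ExecuteCommand, which sidesteps both the read-modify-write race and the optimistic-concurrency check. A sketch, with illustrative table and column names:

        // Executes a single server-side increment, as the old ADO.NET code did.
        db.ExecuteCommand(
            "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
            statsId);

    The {0} placeholder is passed as a real SQL parameter, so this stays injection-safe while keeping the increment atomic on the server.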


  • jqPlot AJAX request

    - by Moozy
    I'm trying to do a dynamic content load for jqPlot charts, but something is wrong. This is my JavaScript code:

        $(document).ready(function () {
            var ajaxDataRenderer = function (url, plot, options) {
                var ret = null;
                $.ajax({
                    // have to use synchronous here, else the function
                    // will return before the data is fetched
                    async: false,
                    url: url,
                    dataType: "json",
                    success: function (data) {
                        ret = data;
                        console.warn(data);
                    }
                });
                return ret;
            };

            // The url for our json data
            var jsonurl = "getData.php";
            var plot1 = $.jqplot('chart1', jsonurl, {
                title: 'Data Point Highlighting',
                dataRenderer: ajaxDataRenderer,
                dataRendererOptions: { unusedOptionalUrl: jsonurl },
                axes: {
                    xaxis: {
                        renderer: $.jqplot.DateAxisRenderer,
                        min: '11/01/2012',
                        max: '11/30/2012',
                        tickOptions: { formatString: '%b %#d' },
                        tickInterval: '5 days'
                    },
                    yaxis: {
                        tickOptions: { formatString: '%.2f' }
                    }
                },
                highlighter: { show: true, sizeAdjust: 7.5 },
                cursor: { show: false }
            });
        });

    It displays the chart, but not the values - it looks like it is not getting my data. The output of console.warn(data) is:

        [["11-01-2012",0],["11-02-2012",0],["11-03-2012",0],["11-04-2012",0],["11-05-2012",0],["11-06-2012",0],["11-07-2012",0],["11-08-2012",0],["11-09-2012",0],["11-10-2012",0],["11-11-2012",0],["11-12-2012",0],["11-13-2012",0],["11-14-2012",0],["11-15-2012",2],["11-16-2012",5],["11-17-2012",0],["11-18-2012",1],["11-19-2012",0],["11-20-2012",0],["11-21-2012",0],["11-22-2012",0],["11-23-2012",0],["11-24-2012",0],["11-25-2012",1],["11-26-2012",0],["11-27-2012",0],["11-28-2012",0],["11-29-2012",0],["11-30-2012",0]]


  • Query Access and VB.NET

    - by yae
    Hi all: I have 2 tables, "products" and "pieces":

        PRODUCTS: idProd, product, price
        PIECES:   id, idProdMain, idProdChild, quant

    idProdMain and idProdChild both reference the table "products". Other considerations: one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. Example:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer        - 300€
        2 - Hard Disk       - 100€
        3 - Memory          -  50€
        4 - Main Board      - 100€
        5 - Software        -  50€
        6 - CDroms 100 un.  -  30€

        TABLE PIECES (id - idProdMain - idProdChild - quant)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    What do I need? I need to update the price of the main product when the price of a child product (piece) changes. Following the previous example, if I change the price of the product "Memory" (which is a piece too) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries?

    I have already tried this to obtain the price of the main product, but it doesn't run - the query returns no value:

        SELECT Sum(products.price * pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
             ON (products.idProd = pieces.idProdChild)
             AND (products.idProd = pieces.idProdChild)
             AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain) = 5));
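
    Two things go wrong in that attempt: the ON clause ties products.idProd to both idProdChild and idProdMain at once (which can never both hold), and idProdMain = 5 (Software) has no rows in PIECES at all, so even a correct join returns nothing. A sketch of a corrected aggregate, joining on the child id only:

        SELECT Sum(products.price * pieces.quant) AS NewPrice
        FROM pieces INNER JOIN products
             ON pieces.idProdChild = products.idProd
        WHERE pieces.idProdMain = 1;

    Against the example data this returns 1*100 + 2*50 + 1*100 = 300€ for the Computer, and 320€ once Memory is raised to 60€. From VB.NET the result can then be written back with a parameterized UPDATE (Access is picky about aggregate subqueries inside UPDATE statements):

        UPDATE products SET price = ? WHERE idProd = ?;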


  • Thread-safe lazy construction of a singleton in C++

    - by pauldoo
    Is there a way to implement a singleton object in C++ that is:

    1. Lazily constructed in a thread-safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once).
    2. Not reliant on static variables having been constructed beforehand (so the singleton object is itself safe to use during the construction of static variables).

    (I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed - i.e., even before static constructors are executed, their values may already be "initialized" in the program image? If so, perhaps this can be exploited to implement a singleton mutex, which can in turn be used to guard the creation of the real singleton.)

    Excellent - it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appear to be two broad solutions:

    1. Use static initialization (as opposed to dynamic initialization) of a POD static variable, and implement my own mutex with that using the built-in atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew about already.
    2. Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and I am very grateful for the answers posted.
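
    For reference, both roads later gained standard spellings; a sketch assuming a C++11 compiler (which postdates this question):

        #include <mutex>

        class Singleton {
        public:
            static Singleton& instance() {
                // C++11 guarantees thread-safe, one-time initialization of
                // function-local statics; construction is deferred to first call.
                static Singleton s;
                return s;
            }
        private:
            Singleton() = default;
        };

    Before C++11, the same effect was achieved by guarding construction with pthread_once or boost::call_once (std::call_once is the standardized equivalent).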


  • Best way to split a string by word (SQL Batch separator)

    - by Paul Kohler
    I have a class I use to "split" a string of SQL commands by a batch separator - e.g. "GO" - into a list of SQL commands that are run in turn:

        private static IEnumerable<string> SplitByBatchIndecator(string script, string batchIndicator)
        {
            string pattern = string.Concat("^\\s*", batchIndicator, "\\s*$");
            RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase | RegexOptions.Multiline;
            foreach (string batch in Regex.Split(script, pattern, options))
            {
                yield return batch.Trim();
            }
        }

    My current implementation uses a Regex with yield, but I am not sure if it's the "best" way:

    - It should be quick.
    - It should handle large strings (I have some scripts that are 10 MB in size, for example).
    - The hardest part (which the above code currently does not do) is taking quoted text into account.

    Currently the following SQL will incorrectly get split, because "go" appears on a line by itself inside a string literal:

        var batch = QueryBatch.Parse(@"-- issue...
        insert into table (name, desc) values('foo', 'if the
        go
        is on a line by itself we have a problem...')");
        Assert.That(batch.Queries.Count, Is.EqualTo(1), "This fails for now...");

    I have thought about a token-based parser that tracks the state of open and closed quotes, but am not sure if Regex will do it. Any ideas?!


  • MySQL Server Optimization

    - by Ish Kumar
    Hi geeks, we are having serious MySQL (InnoDB) performance issues at the moments when we do:

    - 10-20 insertions on TABLE1
    - 10-20 updates on TABLE2

    Both of the above happen within a fraction of a second, and this occurs every few (10-15) minutes. Meanwhile, all online users (approx. 400-600) run a read operation on a join of TABLE1 and TABLE2 every second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44

    Issues:

    - Lots of queries wait and later expire (seen via phpMyAdmin / processes).
    - My poor MySQL server sometimes crashes.

    Questions:

    - Q1: Any suggestions for optimizing at the MySQL level?
    - Q2: I am thinking of using persistent connections at the application level - is that right?

    Info added later: the database engine is InnoDB; TABLE1 has 400,000 rows (inserting 8,000 daily) and TABLE2 has 8,000 rows. The query run every second is:

        SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price,
               u.username, u.email, u.mobile
        FROM TABLE1 b, TABLE2 u
        WHERE b.credit = 0
          AND b.user_id = u.id
          AND b.auction_id = "12345"
        ORDER BY b.id DESC
        LIMIT 10;

    (There are a few more queries, but they are not as critical.) Indexing is good and we use indexes wisely - in the above query all ids are indexed. TABLE1 has frequent insertions and TABLE2 has frequent updates.
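
    For the query above, one thing worth testing (a sketch, assuming the existing indexes are single-column): a composite index with the equality columns first. Since InnoDB secondary indexes implicitly carry the primary key, the ORDER BY b.id DESC ... LIMIT 10 can then also be satisfied from the index instead of a sort.

        ALTER TABLE TABLE1 ADD INDEX idx_auction_credit (auction_id, credit);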


  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCi

    - by John Millikin
    I am using GHC 6.12.1 on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly; compiled modules have invalid pointers and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue.

    I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import() {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading. Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment? Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C
        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a    = " ++ show siglist_a
            putStrLn $ "siglist_b    = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a   " siglist_a
            peekSiglist "b   " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, with all pointer values identical and valid:

        $ runhaskell Main.hs
        siglist_a    = 0x00007f53a948fe00
        siglist_b    = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a   [2] = Just "Interrupt"
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a    = 0x0000000040378918
        siglist_b    = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a   [2] = Nothing
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"


  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price.

    The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes), and showing daily, weekly, monthly, quarterly and yearly sales reports will require various types of SQL queries. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAraeaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, PrentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to get a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice - just advice regarding the DailySales table. Is there any way to break up this particular table to achieve granularity?


  • Update mapping table in LINQ

    - by Gary McGill
    I have a table Customers with a CustomerId field, and a table Publications with a PublicationId field. Finally, I have a mapping table CustomersPublications that records which publications a customer can access - it has two fields: CustomerId and PublicationId.

    For a given customer, I want to update the CustomersPublications table based on a list of publication ids: remove records where the PublicationId is not in the list, and add new records where the PublicationId is in the list but not already in the table. This would be easy in SQL, but I can't figure out how to do it in LINQ. For the delete part, I tried:

        var recordsToDelete = dataContext.CustomersPublications.Where(
            cp => (cp.CustomerId == customerId)
                  && !publicationIds.Contains(cp.PublicationId));
        dataContext.CustomersPublications.DeleteAllOnSubmit(recordsToDelete);

    ... but that didn't work. I got an error:

        System.NotSupportedException: Method 'Boolean Contains(Int32)' has no supported translation to SQL

    So, I tried using Any(), as follows:

        var recordsToDelete = dataContext.CustomersPublications.Where(
            cp => (cp.CustomerId == customerId)
                  && !publicationIds.Any(p => p == cp.PublicationId));

    ... and this just gives me another error:

        System.NotSupportedException: Local sequence cannot be used in LINQ to SQL implementation of query operators except the Contains() operator

    Any pointers? [I have to say, I find LINQ baffling (and frustrating) for all but the simplest queries. Better error messages would help!]
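
    One commonly suggested workaround (a sketch): LINQ to SQL can translate Contains into a SQL IN clause, but only when it is called on a concrete local in-memory collection such as List<int>, so materialize the id sequence first.

        var ids = publicationIds.ToList(); // a plain List<int>, translatable to IN (...)
        var recordsToDelete = dataContext.CustomersPublications.Where(
            cp => cp.CustomerId == customerId && !ids.Contains(cp.PublicationId));
        dataContext.CustomersPublications.DeleteAllOnSubmit(recordsToDelete);

    The insert half can then be computed in memory from the same list, since the set of existing PublicationIds for one customer is small enough to fetch and diff locally.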


  • Encoding issue: Java -> XLS

    - by Xerg
    This is not a pure Java question and can also be related to HTML. I've written a Java servlet that queries a database table and shows the result as an HTML table. The user can also ask to receive the result as an Excel sheet, which I create by printing the same HTML table but with the content type "application/vnd.ms-excel". The Excel file is created fine. The problem is that the tables may contain non-English data, so I want to use UTF-8 encoding.

        PrintWriter out = response.getWriter();
        response.setContentType("application/vnd.ms-excel:ISO-8859-1");
        //response.setContentType("application/vnd.ms-excel:UTF-8");
        response.setHeader("cache-control", "no-cache");
        response.setHeader("Content-Disposition", "attachment; filename=file.xls");
        out.print(src);
        out.flush();

    The non-English characters appear as garbage (áéíóú). I also tried converting the String through bytes:

        byte[] arrByte = src.getBytes("ISO-8859-1");
        String result = new String(arrByte, "UTF-8");

    but I still get garbage. What can I do? Thanks.
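
    Two details in the snippet stand out (a sketch of the usual fix): the charset is appended with a colon rather than the standard "; charset=" parameter, and the encoding must be set before getWriter() is called, because the servlet container creates the writer with whatever encoding is in effect at that moment.

        // Set content type and encoding first, then obtain the writer.
        response.setContentType("application/vnd.ms-excel; charset=UTF-8");
        response.setCharacterEncoding("UTF-8");
        response.setHeader("Cache-Control", "no-cache");
        response.setHeader("Content-Disposition", "attachment; filename=file.xls");
        PrintWriter out = response.getWriter();
        out.print(src);
        out.flush();

    The getBytes/new String round-trip is not needed once the response encoding itself is correct.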


  • How to sort objects in a many-to-many relationship in Ruby on Rails?

    - by Kenji Kina
    I've been trying to deal with this problem for a couple of hours now and haven't been able to come up with a clean solution. It seems I'm not too good with Rails... Anyway, I have the following models:

        class Article < ActiveRecord::Base
          has_many :line_aspects, :dependent => :destroy
          has_many :aspects, :through => :line_aspects
          # plus a name field
        end

        class LineAspect < ActiveRecord::Base
          belongs_to :article
          belongs_to :aspect
        end

        class Aspect < ActiveRecord::Base
          belongs_to :data_type
          has_many :line_aspects
          has_many :articles, :through => :line_aspects
        end

    Now, what I would like to do is sort these in two steps: first list the Articles by their name, and then within each article sort its line aspects by Aspect name (note: the name of the Aspect, not of the middleman). For instance, alphabetically (sorry if the notation is not correct):

        [{ article => 'Apple',
           line_aspects => [
             {:value => 'red'},   # corresponding to the Aspect with :name => 'color'
             {:value => 'small'}  # corresponding to the Aspect with :name => 'shape'
           ]
         },
         { article => 'Watermelon',
           line_aspects => [
             {:value => 'green'}, # corresponding to the Aspect with :name => 'color'
             {:value => 'big'}    # corresponding to the Aspect with :name => 'shape'
           ]
         }]

    Again, note that these are ordered by the aspect name (color before shape) instead of by the specific values of each line (red before green). (NOTE: my intention is to display these in a table in the view.) I have not found a good way to do this in Rails yet without resorting to N queries. Can anyone tell me a good way to do it?
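
    A sketch of a single-query approach, in the Rails 2/3-era syntax the models above suggest: eager-load the join rows with their aspects and let SQL perform both sorts.

        # Forces a joined eager load, so one query returns everything ordered.
        articles = Article.find(:all,
          :include => { :line_aspects => :aspect },
          :order   => 'articles.name, aspects.name')

        articles.each do |article|
          article.line_aspects.each do |la|
            # line_aspects arrive ordered by aspect name, per the :order clause
            puts "#{article.name}: #{la.aspect.name} = #{la.value}"
          end
        end

    Referencing an included table in :order makes ActiveRecord use its JOIN-based eager-loading strategy, which is what avoids the N queries.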


  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS packages are slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), no indexes (clustered or otherwise), and no constraints on this warehouse. In other words, it is 100% efficiency-free. We are going to add indexes based on usage - by analyzing new queries and current query performance.

    So, instead of doing it our old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID]
            FROM [dbo].[Company]
            WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)', N'@P1 int', 1

    (and the same statement repeats with @P1 = 2, 3, 4, ...). When analyzed, these statements are given the reason "Event does not reference any tables". Huh? Does it not see the FROM [dbo].[Company]?! What is going on here?

    So, I have multiple questions:

    - How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch?
    - Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?


  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extrene, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer coupled with it's total lack of support for recent standards makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website. The game will need to be pretty dynamic client-side, with animations and other eye-candy things, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a little stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; gotta see what IE9 will bring), would it be better to just do the whole thing in Flash? I know this means requiring the plug-in AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP, I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS and PHP can deal with it easily). Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, then why not just provide it to all browsers) or totally not supporting it and requiring the use of other browsers, although this is plain stupid from a business perspective. So here am I, wondering what approach to take and thus asking for your advice. How should I build the client-side? AJAX in all browsers, Flash in all browsers or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE).


  • Setting objects (not users) inactive after a period of time in ASP.NET MVC

    - by bastijn
    This question is mainly to verify my current idea. I have a series of objects which I want to be active for a specified amount of time. For instance: ads which are shown in the ad space only for the amount of time bought, objects which should only appear in search results while active, and frontpage posts which should be set to inactive after a prespecified time.

    My current idea is to just give those objects a StartDate and an EndDate, and filter the search routines to only show results which fall in the range StartDate < currentDate < EndDate. Is this the normal structure, or should there be some auto-routine which runs periodically, checks for objects that are past their time, and sets an "inactive" property to true? Running a checker every 5 minutes that scans all DB objects seems like such a hassle - a bad idea, to me.

    So is the first structure the most commonly used, or are there other options? When searching on Google or SO, the queries only return results about setting users inactive.
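
    The filter-on-read pattern needs no background job at all; a sketch of the shared predicate in C# (db.Ads and the property names are illustrative):

        // Reusable "currently active" filter applied by every query path.
        DateTime now = DateTime.UtcNow;
        var activeAds = db.Ads.Where(a => a.StartDate <= now && now < a.EndDate);

    If an explicit Inactive flag is ever needed (e.g. for manual takedowns), it can complement the date range rather than replace it, which still avoids the polling job.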

