Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.


  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so:

        def filter_turkeys
          Farm.scoped(:conditions => {:animal => 'turkey'})
        end

        def filter_tasty
          Farm.scoped(:conditions => {:tasty => true})
        end

        turkeys = filter_turkeys
        is_tasty = filter_tasty
        tasty_turkeys = filter_turkeys.filter_tasty

    But say that I already have a named_scope that does what these anonymous scopes do; can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, a la:

        def filter_turkey(existing_query)
          existing_query.turkey # turkey is a named_scope that filters for turkeys
        end

    But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods, rather than just build all the named scopes I need and concatenate them together: my queries are too complex to be handled nicely by named_scopes themselves, and the queries get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.

    Read the article

  • Linq - How to collect Anonymous Type as Result for a Function

    - by GibboK
    I use C# 4, ASP.NET and EF 4. I'm precompiling a query whose result should be a collection of an anonymous type. At the moment I use this code:

        public static readonly Func<CmsConnectionStringEntityDataModel, string, dynamic> queryContentsList =
            CompiledQuery.Compile<CmsConnectionStringEntityDataModel, string, dynamic>(
                (ctx, TypeContent) => ctx.CmsContents
                    .Where(c => c.TypeContent == TypeContent & c.IsPublished == true & c.IsDeleted == false)
                    .Select(cnt => new { cnt.Title, cnt.TitleUrl, cnt.ContentId, cnt.TypeContent, cnt.Summary })
                    .OrderByDescending(c => c.ContentId));

    I suspect returning dynamic from the Func does not work properly, and I get this error: "Sequence contains more than one element". I suppose I need my function to return a collection of anonymous types... Do you have any idea how to do it? What am I doing wrong? Please post a sample of code, thanks!

    Update:

        public class ConcTypeContents
        {
            public string Title { get; set; }
            public string TitleUrl { get; set; }
            public int ContentId { get; set; }
            public string TypeContent { get; set; }
            public string Summary { get; set; }
        }

        public static readonly Func<CmsConnectionStringEntityDataModel, string, ConcTypeContents> queryContentsList =
            CompiledQuery.Compile<CmsConnectionStringEntityDataModel, string, ConcTypeContents>(
                (ctx, TypeContent) => ctx.CmsContents
                    .Where(c => c.TypeContent == TypeContent & c.IsPublished == true & c.IsDeleted == false)
                    .Select(cnt => new ConcTypeContents { Title = cnt.Title, TitleUrl = cnt.TitleUrl, ContentId = cnt.ContentId, TypeContent = cnt.TypeContent, Summary = cnt.Summary })
                    .OrderByDescending(c => c.ContentId));
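
    A compiled query can't hand an anonymous type across a method boundary, and typing it as dynamic leaves the compiler without a concrete result type to build the expression around. A minimal sketch of one way through, reusing the context and DTO names from the question: project into the named ConcTypeContents type and declare the delegate as returning IQueryable of that type, materializing at the call site.

        // Sketch only: type the compiled delegate as IQueryable<T> of a named DTO.
        public static readonly Func<CmsConnectionStringEntityDataModel, string, IQueryable<ConcTypeContents>> queryContentsList =
            CompiledQuery.Compile<CmsConnectionStringEntityDataModel, string, IQueryable<ConcTypeContents>>(
                (ctx, typeContent) => ctx.CmsContents
                    .Where(c => c.TypeContent == typeContent && c.IsPublished && !c.IsDeleted)
                    .OrderByDescending(c => c.ContentId)
                    .Select(cnt => new ConcTypeContents
                    {
                        Title = cnt.Title,           // named-type projections need
                        TitleUrl = cnt.TitleUrl,     // explicit member assignments
                        ContentId = cnt.ContentId,
                        TypeContent = cnt.TypeContent,
                        Summary = cnt.Summary
                    }));

        // Usage (hypothetical context variable and argument value):
        // enumerate outside the compiled expression.
        // List<ConcTypeContents> contents = queryContentsList(ctx, "article").ToList();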

    Read the article

  • Outgoing Emails (Git Patches) Blocked by Windows Live

    - by SteveStifler
    Just recently I dove into the VideoLAN open source project. This was my first time using git, and when sending in my first patch (using git send-email --to [email protected] patches), my computer's local mail sent me the following message in the terminal (I'm on OS X 10.6, by the way):

        Mail rejected by Windows Live Hotmail for policy reasons. We generally do not
        accept email from dynamic IPs, as they are not typically used to deliver
        unauthenticated SMTP e-mail to an Internet mail server. http://www.spamhaus.org
        maintains lists of dynamic and residential IP addresses. If you are not an
        email/network admin, please contact your E-mail/Internet Service Provider for
        help. Email/network admins, please visit http://postmaster.live.com for email
        delivery information and support.

    They must think I'm a spammer. I have a dynamic IP, and my ISP (Charter) won't let me get a static one, so I tried pointing git's preferences at my Gmail account:

        git config --global user.email "[email protected]"

    However, I got the exact same message again. My guess is that it has something to do with the native mail client's preferences, but I have no idea how to access or modify them. Anybody have any ideas for solving this? Thanks!
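
    A note on why changing user.email didn't help: it only sets the From header. git send-email picks its transport from the sendemail.* settings and, when none are set, typically falls back to the local sendmail/localhost mailer, which is what Hotmail is rejecting. A sketch of sending through an authenticated SMTP relay instead (Gmail's usual host and port shown; substitute your provider's values, and note the addresses here are the question's own obfuscated placeholders):

        git config --global sendemail.smtpServer smtp.gmail.com
        git config --global sendemail.smtpServerPort 587
        git config --global sendemail.smtpEncryption tls
        git config --global sendemail.smtpUser [email protected]

        git send-email --to [email protected] patches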

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM: how to optimize so memory is enough?

    - by TomSawyer
    I have 1 dedicated server with Nginx proxying for Apache, plus Memcache and MySQL, with 4G RAM. These days, visitors to my site haven't increased, but my server always gets overloaded at certain times (9AM-3PM). RAM in use climbs second by second until it is full, and at that moment the server overloads. I have to kill the Apache and MySQL services and reboot to get free memory, and then it fills up again. It's a terrible circle.

    Here is the RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there other arguments they/I might have missed? (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • Why can't you return a List from a Compiled Query?

    - by Andrew
    I was speeding up my app by using compiled queries for queries which were getting hit over and over. I tried to implement it like this:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id)
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, List(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                (From u In db.SomeEntities _
                 Where u.SomeLinkedEntity.ID = fk_id _
                 Select u).ToList())

    This did not work, and I got this error message:

        Type : System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Value cannot be null.
        Parameter name: value

    However, when I changed my compiled query to return IQueryable instead of List, like so:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id).ToList()
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, IQueryable(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                From u In db.SomeEntities _
                Where u.SomeLinkedEntity.ID = fk_id _
                Select u)

    It worked fine. Can anyone shed any light on why this is? BTW, compiled queries rock! They sped up my app by a factor of 2.
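
    A likely explanation, sketched in C# with a hypothetical typed MyDataContext: CompiledQuery.Compile stores the lambda as an expression tree that LINQ to SQL translates to SQL once. ToList() is a LINQ-to-Objects call with no SQL translation, so while the lambda compiles, the translation fails at runtime, which plausibly surfaces as the ArgumentNullException above. Element operators like First() can live inside the tree (the database understands TOP 1), but materialization has to stay outside:

        // OK: the body is a pure query expression, fully translatable to SQL.
        static readonly Func<MyDataContext, int, IQueryable<SomeEntity>> Ok =
            CompiledQuery.Compile((MyDataContext db, int fkId) =>
                db.SomeEntities.Where(u => u.SomeLinkedEntity.ID == fkId));

        // Not OK: ToList() cannot be translated, so the compiled query
        // breaks at runtime even though the C# compiles.
        // static readonly Func<MyDataContext, int, List<SomeEntity>> Broken =
        //     CompiledQuery.Compile((MyDataContext db, int fkId) =>
        //         db.SomeEntities.Where(u => u.SomeLinkedEntity.ID == fkId).ToList());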

    Read the article

  • How to use Enquire.js?

    - by big_smile
    Enquire.js is a JavaScript library that re-creates CSS media queries in JavaScript. This means you can wrap your JavaScript in media queries, just like you would wrap CSS in media queries. I'm not quite sure how to use it. This tutorial says this:

        enquire.register("max-width: 960px", function() {
            // put your code here
        });

    However, when I follow that, my code stops working. Here is an example of some jQuery tabs without Enquire.js; they work fine. Here are the same tabs, but with Enquire.js added; now they stop working. I've experimented with different variations of the code, but none of them work. What am I doing wrong?

    I think you might have to place the jQuery tab code in a separate file and then link to that file from within Enquire.js, but I'm not sure how you would do that. (Although it would be handy to know, as I'd imagine it would let your scripts be more reusable.)

    PS: This is not a criticism of Enquire.js. I know that the problem lies squarely with my lack of proficiency in JavaScript! I did spend a couple of hours searching for a solution, but couldn't find anything. Thanks for any help!

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom made for handling these situations, but as a language it does not compare to C#; even if it did, its tooling does not; and even if that did, forcing yet another language on the team is something to be avoided whenever possible.

    Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects.

    My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed.

    I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.

    Read the article

  • How would you organize this in ASP.NET MVC?

    - by chobo
    I have an ASP.NET MVC 2.0 application that contains Areas/Modules like calendar, admin, etc. There may be cases where more than one area needs to access the same Repo, so I am not sure where to put the Data Access Layers and Repositories.

    First option: Should I create Data Access Layer files (LINQ to SQL in my case) with their accompanying Repositories for each area, so each area only contains the Tables and Repositories needed by that area? The benefit is that everything needed to run that module is in one place, so it is more encapsulated (in my mind, anyway). The downside is that I may have duplicate queries, because other modules may use the same query.

    Second option: Or would it be better to place the DAL and Repositories outside the Areas and treat them as global? The advantage is I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :)).

    Which option makes more sense? If someone has a better way, I'd love to hear it. Thanks!

    Read the article

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.)

    So I use g++ and ar to create a static library (.a). This file contains all symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes an unnecessarily long time, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process.

    When creating a dynamic library (.dylib/.so) you can actually use a linker, which can resolve all intra-lib symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking but use a static library. If my Google searches are correct in thinking this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.
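
    One approach that gets close, sketched below with GNU binutils (a sketch only; the exact flags vary between platforms and toolchain versions): partially link all the objects into one relocatable object, which resolves the intra-library references up front, then localize every symbol except the intended exports before archiving.

        # Merge the ~300 objects into a single relocatable object;
        # intra-library references get resolved at this step.
        ld -r -o combined.o *.o

        # Keep only the intended public symbols global; exports.txt lists
        # one exported symbol name per line (hypothetical file name).
        objcopy --keep-global-symbols=exports.txt combined.o

        # Package the single pre-linked object as a static library.
        ar rcs libmylib.a combined.o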

    Read the article

  • Fix N+1 query in "declarative_authorization" gem using gem "bullet"

    - by makaroni4
    Currently I am working on one big web application, and to make it work faster I decided to refactor all N+1 queries to decrease the number of requests to the database (http://rails-bestpractices.com/posts/29-fix-n-1-queries). So I installed the "bullet" gem, which doesn't work with Rails 3.1.1 right now (you can use the fork from https://github.com/flyerhzm/bullet). When using the declarative_authorization gem, I get the same alerts on each page:

        N+1 Query detected
          Role => [:permissions]
          Add to your finder: :include => [:permissions]

        N+1 Query detected
          Permission => [:permission_rules]
          Add to your finder: :include => [:permission_rules]

        CACHE (0.0ms) SELECT "roles".* FROM "roles"
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 1
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 2
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 3
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 4
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 6
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 7
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 8
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 30
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 31
        ...

    Could you please help me with this, and with making these queries faster?

    Read the article

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often-used query. Using a CompiledQuery seemed to be the answer, but when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

        static class Queries
        {
            // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
            private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
                CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                    (entities, tag) => from trade in entities.TradeSet
                                       where trade.trade_tag == tag
                                       select trade);

            public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
                return tradeQuery.FirstOrDefault();
            }

            public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                               where trade.trade_tag == tag
                                               select trade;
                return tradeQuery.FirstOrDefault();
            }
        }
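
    A plausible culprit, assuming Entity Framework's documented compiled-query behavior: FirstOrDefault() is composed on top of the IQueryable the compiled delegate returns, and composing additional operators over a compiled query forces EF to recompile the whole thing on every call, which erases the saving. A hedged sketch that folds the element operator into the compiled expression instead:

        // Sketch: compile the *entire* query, including FirstOrDefault,
        // so nothing is composed on top of the compiled delegate.
        private static readonly Func<MyEntities, int, Trade> mCompiledFindTrade =
            CompiledQuery.Compile<MyEntities, int, Trade>(
                (entities, tag) => (from trade in entities.TradeSet
                                    where trade.trade_tag == tag
                                    select trade).FirstOrDefault());

        public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
        {
            return mCompiledFindTrade(entities, tag);
        }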

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello all. I have a denormalized table containing employee information. The fields are employee id, name and department name; the primary key is a composite of all three. An employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.).

    Example data:

        1 e1 dep1
        2 e1 dep2
        3 e2 dep1
        4 e2 dep3
        5 e3 dep1
        5 e3 dep2
        5 e3 dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row. However, I want to group the entities by emp id and return all the departments an employee belongs to as a list. I can merge the entities after retrieving them, but it would be better if I could use some EclipseLink feature out of the box.

    One way to do the read is the following. I create two dynamic types corresponding to employee: one having id, name as the primary key, and one having id, department as the primary key. I create a OneToManyMapping from the first type to the second. Then, when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA?

    I cannot get the same dynamic type configuration working for the write scenario, because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which put the following entries in the table:

        id   name     department
        102  emp_102
        102  st
        102  dep_102
        102  dep_102
        102  dep_102

    instead of:

        id   name     department
        102  emp_102  st
        102  emp_102  dep_102
        102  emp_102  dep_102
        102  emp_102  dep_102

    Is there any way I can get writes to work with this schema using EclipseLink? I want to avoid doing the heavy lifting of merging rows for such a denormalized schema, or generating each row before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server (5.0.75, Ubuntu) on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server becomes unresponsive since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that.

    My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling indicates either, or also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
          (SELECT external_id FROM contact_relations
           WHERE external_table = 'contacts'
             AND contact_id IN
               (SELECT contact_id FROM contacts
                WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                       OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                  AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question. If I have a contact table containing:

        John Smith,The Fun Factory,555-1212,[email protected]

    What's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example "actor" should bring up "Factory".

    Read the article

  • Can't delete record via the datacontext it was retrieved from

    - by Antilogic
    I just upgraded one of my application's methods to use compiled queries (not sure if this is relevant). Now I'm getting contradictory error messages when I run the code. This is my method:

        MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
        if (existing != null)
        {
            MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
        }

    When I run it I get this message:

        Cannot remove an entity that has not been attached.

    Note that the compiled query and the DeleteOnSubmit reference the same DataContext. Still, I figured I'd humor the application and add an Attach command before the DeleteOnSubmit, like so:

        MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
        if (existing != null)
        {
            MyRequestScopedDataContext.MyClasses.Attach(existing);
            MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
        }

    BUT... when I run this code, I get a completely different, contradictory error message:

        An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.

    I'm at a complete loss... Does anyone else have some insight as to why I can't delete a record via the same DataContext I retrieved it from?
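
    One thing worth ruling out first: if the DataContext was created with ObjectTrackingEnabled = false, every entity it returns is detached even though it came from that same context, and both errors above would follow. If the entity really is detached, a common LINQ to SQL workaround is deleting through a key-only stub. A sketch, assuming MyClass has a single key property named Id (adjust to the real primary key):

        // Hypothetical stub carrying only the primary key.
        var stub = new MyClass { Id = existing.Id };

        MyRequestScopedDataContext.MyClasses.Attach(stub);
        MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(stub);
        MyRequestScopedDataContext.SubmitChanges();

    Caveat: if the table has UpdateCheck-enabled columns, the generated DELETE compares them too, so this stub approach needs a timestamp/version column or relaxed UpdateCheck settings to succeed.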

    Read the article

  • Is it possible to create multi-tiered WHERE statements in MySQL?

    - by Brendan
    I'm currently developing a program that will generate reports based upon lead data. My issue is that I'm running three queries for something that I would like to only run one query for. For instance, I want to gather data for leads generated in the past day (submission_date > (NOW() - INTERVAL 1 DAY)), and I would like to find out how many total leads there were and how many sold leads there were in that timeframe (sold=1 / sold=0). The issue is that this is currently being done as two queries, one with WHERE sold = 1 and one with WHERE sold = 0. This is all well and good, but when I want to generate this data for the past day, week, month, year, and all time, I will have to run 10 queries to obtain it. I feel like there HAS to be a more efficient way of doing this. I know I can create a MySQL function for this, but I don't see how that would solve the problem. Thanks!!
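
    Conditional aggregation can collapse the sold/unsold split, and all five timeframes, into one pass over the table, since MySQL treats boolean expressions as 0/1 inside SUM(). A sketch (the leads table name is assumed; the month and year columns follow the same pattern as day and week):

        SELECT
            SUM(submission_date > NOW() - INTERVAL 1 DAY)               AS day_total,
            SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 DAY)  AS day_sold,
            SUM(submission_date > NOW() - INTERVAL 1 WEEK)              AS week_total,
            SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 WEEK) AS week_sold,
            COUNT(*)                                                    AS all_total,
            SUM(sold = 1)                                               AS all_sold
        FROM leads;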

    Read the article

  • Error 1009 after gotoAndStop - stage instance never gets added

    - by ELambda
    I've been through the forums for hours (days?) searching on 1009 errors, but I remain stumped. It seems very mysterious, and I would LOVE some help if you have any ideas.

    I have a single .swf that is 7 frames long. Each frame represents a different "page", and you can switch pages through a menu widget in the top right corner. The menu widget calls gotoAndPlay("frame"). Everything works fine except when I switch from one particular frame to another. Then, during initialization of the new frame (setting some visible properties on various items, in ActionScript), I get the dreaded 1009 error on a specific stage instance, a dynamic text instance i_word. Here's what I've tried so far:

    - Made sure the ActionScript for the new frame starts with a stop() statement before starting initialization. No dice.
    - Tried changing i_word into a movie clip instead of dynamic text, and made sure it was exported for ActionScript. No difference. (I also have 2 other dynamic text instances on the same page that don't seem to cause a problem.)
    - Added an ENTER_FRAME listener when the new frame is loaded, in case the problem was a timing issue, with a big if statement checking that i_word and the other instances are not null before proceeding to initialization. It never enters the if, because i_word NEVER gets added. I added trace statements for all instances that are null, and it is the only one. If I remove all references to i_word in my ActionScript, everything else is not null and things go forward; the text for i_word even shows up on the screen in that case.
    - Tried renaming i_word. No dice.
    - Tried deleting the layer i_word was on and adding a new layer. No dice.

    It feels like there is a serious gremlin in my Flash file somewhere. Or maybe I'm missing something obvious. Let me know if you have any ideas... I'd be so grateful. Thank you!

    Read the article

  • How to set element value dynamically based on for each loop count

    - by user1515918
    Here is a snippet of an XSL file that I am trying to make work. I would like to set the value of the request-tot-queries element in the header based on the loop count in the body. Your help would be greatly appreciated!

        <HEADER>
          <request-tot-queries>$Counter</request-tot-queries>
        </HEADER>
        <Body>
          <xsl:for-each select="//Request/Responses/Pooled/ResidenceHistory/Residencies/Residency">
            <count><xsl:variable name="counter" select="position()"/></count>
            <xsl:if test="DateRange/To/Date[@Type!='Present']">
              <subject-query>
                . . .
              </subject-query>
            </xsl:if>
          </xsl:for-each>
        </Body>
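
    XSLT variables are single-assignment, so a counter cannot be accumulated across for-each iterations; the usual approach is to compute the total up front with count(), reusing the loop's own predicate. A sketch (untested, with the XPath copied from the snippet above):

        <HEADER>
          <request-tot-queries>
            <xsl:value-of select="count(//Request/Responses/Pooled/ResidenceHistory/Residencies/Residency[DateRange/To/Date[@Type != 'Present']])"/>
          </request-tot-queries>
        </HEADER>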

    Read the article

  • redirect web app results to own application

    - by vbNewbie
    Is it possible to redirect a web app's results to a second application? I cannot parse the HTML source; it contains the JavaScript functions that execute the queries, but all the content is probably server-side. I hope this makes sense.

    The owner has made the script available, but I am not sure how this helps. Can I, using .NET, call the site and redirect results to a file or database? The app accesses one of Google's APIs, performs searches/queries, and returns results which are displayed on the site. All the JavaScript functions that perform these queries are listed in the source, but I do not know JavaScript, so it does not make much sense to me.

    I have used the documentation, which uses the OAuth protocol to access the API, and have implemented that in my web app, but it took me nearly a week to get the request token right, and now when I send requests to the API, sometimes I get one result back and sometimes none. It is frustrating me. The owner of the web app has given me use of his script, but he says all that happens is that my browser interacts with the Google API, not his server. So I thought: why not have my web app call his, since his interacts with the API flawlessly, and have the results sent to my app to save in a database? I have very little experience here, so pardon my ignorance.

    Read the article

  • Too many columns to index - use MySQL Partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of rows needs to be searched, and quite a small number at that, say 50,000 rows.

    So we have considered using MySQL partitioned tables, with a column that is basically isActive dividing the two partitions. Most search queries would be run with isActive=1, so they would hit the small 50,000-row partition and be quick without other indexes.

    The only issue is that which rows have isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that; we will need to update isActive based on use of the data in that row. As I understand it, that is no problem though; the data would just be moved from one partition to the other during the UPDATE query.

    We do have a PK on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partitioning column had to be part of any primary key. This would be a huge problem for us, because the primary key id has no relation to whether the row isActive.

    Read the article

  • In Castle Windsor, can I register an interface component and get a proxy of the implementation?

    - by Thiado de Arruda
    Let's consider some cases:

        _windsor.Register(Component.For<IProductServices>()
            .ImplementedBy<ProductServices>()
            .Interceptors(typeof(SomeInterceptorType)));

    In this case, when I ask for an IProductServices, Windsor will proxy the interface to intercept the interface method calls. If instead I do this:

        _windsor.Register(Component.For<ProductServices>()
            .Interceptors(typeof(SomeInterceptorType)));

    then I can't ask Windsor to resolve IProductServices; instead I ask for ProductServices, and it will return a dynamic subclass that will intercept virtual method calls. Of course the dynamic subclass still implements IProductServices.

    My question is: can I register the interface component like in the first case, and get the subclass proxy like in the second case? There are two reasons for me wanting this:

    1. The code that is going to resolve cannot know about the ProductServices class, only about the IProductServices interface.
    2. Some event invocations that pass the sender as a parameter will pass the ProductServices object, and in the first case this object is a field on the dynamic proxy, not the real object returned by Windsor.

    Let me give an example of how this can complicate things. Let's say I have a custom collection that does something when its items notify a property change:

        private void ItemChanged(object sender, PropertyChangedEventArgs e)
        {
            int senderIndex = IndexOf(sender);
            SomeActionOnItemIndex(senderIndex);
        }

    This code will fail if I added an interface proxy, because the sender will be the field in the interface proxy, and IndexOf(sender) will return -1.
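
    One possibility worth trying, hedged because the behavior depends on the Windsor version: register the component with both the interface and the class as exposed services ("forwarded types"). When the implementation class is itself one of the services, Windsor has to create a class proxy rather than an interface proxy, so resolving IProductServices should yield the intercepted subclass, and sender comparisons like IndexOf(sender) see the same object Windsor handed out. A sketch:

        _windsor.Register(
            Component.For<IProductServices, ProductServices>()  // both services, one component
                .ImplementedBy<ProductServices>()
                .Interceptors(typeof(SomeInterceptorType)));

        // Resolving by interface now returns the class proxy.
        var services = _windsor.Resolve<IProductServices>();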

    Read the article

  • Chat app vs REST app - use a thread in an Activity or a thread in a Service?

    - by synic
    In Virgil Dobjanschi's talk "Developing Android REST client applications" (link here), he said a few things that took me by surprise, including:

    - Don't run HTTP queries in threads spawned by your activities. Instead, communicate with a Service to do them, and store the information in a ContentProvider. Use a ContentObserver to be notified of changes.
    - Always perform long-running tasks in a Service, never in your Activity.
    - Stop your Service when you're done with it.

    I understand that he was talking about a REST API, but I'm trying to make it fit with some other ideas I've had for apps. One of the APIs I've been using uses long polling for its chat interface: there is a loop of HTTP queries, most of which will time out. This means that, as long as the app hasn't been killed by the OS and the user hasn't specifically turned off the chat feature, I'll never be done with the Service, and it will stay open forever. This seems less than optimal.

    Long question short: for a chat application that uses long polling to simulate push and immediate response, is it still best practice to use a Service to perform the HTTP queries and store the information in a ContentProvider?

    Read the article

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events:   ID (PK int autoinc), Time (datetime), Caption (varchar)
        Position: ID (PK int autoinc), Time (datetime), Easting (float), Northing (float)

    Is it safe, for example, to list all the events and their position using the Time field as my joining criterion? I.e.:

        SELECT E.*, P.*
        FROM Events E
        JOIN Position P ON E.Time = P.Time

    Or even just comparing a datetime value directly (taking into consideration that the parameterized value may contain a fractional-seconds part, which MySQL has always accepted), e.g.:

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) stores datetime fields WITHOUT milliseconds, so I would assume the queries above would work. However, as of version 5.6.4, I have read that MySQL can now store milliseconds in the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above queries to work; with version 5.6.4 and later, though, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions, it would be greatly appreciated:

    1. In general, how does MySQL compare datetime fields against one another (consider the above query)?
    2. Is the above query fine, and does it make use of indexes on the Time fields? (MySQL < 5.6.4)
    3. Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4)
    4. Will the join query above work? (MySQL 5.6.4)

    EDIT: I know I can cast the datetimes, thanks to those that answered, but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed), and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries and applying indexes etc., not to mention having to rewrite all my queries.

    EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?

    Read the article

  • How to convert a string to variable name

    - by p1xelarchitect
    I'm loading several external JSON files and want to check if they have been successfully cached before the next screen is displayed. My strategy is to check the name of the array to see if it is an object. My code works perfectly when I check only a single array and hard-code the array name into my function. My question is: how can I make this dynamic?

    Not dynamic (this works):

        $(document).ready(function () {
            checkJSON("nav_items");
        });

        function checkJSON() {
            if (typeof nav_items == "object") {
                // success...
            }
        }

    Dynamic (this doesn't work):

        $(document).ready(function () {
            checkJSON("nav_items");
            checkJSON("foo_items");
            checkJSON("bar_items");
        });

        function checkJSON(item) {
            if (typeof item == "object") {
                // success...
            }
        }

    Here is the larger context of my code:

        var loadAttempts = 0;
        var reloadTimer = false;

        $(document).ready(function () {
            checkJSON("nav_items");
        });

        function checkJSON(item) {
            loadAttempts++;
            // if nav_items exists
            if (typeof nav_items == "object") {
                // if a timer is running, kill it
                if (reloadTimer != false) {
                    clearInterval(reloadTimer);
                    reloadTimer = false;
                }
                console.log("found!!");
                console.log(nav_items[1]);
                loadAttempts = 0; // reset
                // load next screen....
            }
            // if nav_items does not yet exist, try 5 times, then give up!
            else {
                // set a timer
                if (reloadTimer == false) {
                    reloadTimer = setInterval(function () { checkJSON(nav_items) }, 300);
                    console.log(item + " not found. Attempts: " + loadAttempts);
                } else {
                    if (loadAttempts <= 5) {
                        console.log(item + " not found. Attempts: " + loadAttempts);
                    } else {
                        clearInterval(reloadTimer);
                        reloadTimer = false;
                        console.log("Giving up on " + item + "!!");
                    }
                }
            }
        }

    Read the article

  • Questions about shifting from mysql to PDO

    - by Scarface
    Hey guys, I have recently decided to switch all my current plain MySQL queries, performed with PHP's mysql_query, over to PDO-style queries to improve performance, portability and security. I just have some quick questions for any experts in this database interaction tool:

    1. Will it prevent injection if all statements are prepared? (I noticed that php.net says "however, if other portions of the query are being built up with unescaped input, SQL injection is still possible", and I was not exactly sure what this meant.) Does this just mean that if all variables are run through a prepare function it is safe, and if some are directly inserted then it is not?

    2. Currently I have a connection at the top of my page and queries performed during the rest of the page. I took a look at PDO in more detail and noticed that there is a try and catch procedure for every query involving a connection and the closing of that connection. Is there a straightforward way of connecting and then reusing that connection, without having to put everything in a try or constantly repeat the procedure of connecting, querying and closing?

    3. Can anyone briefly explain, in layman's terms, what purpose set_exception_handler serves?

    I appreciate any advice from more experienced individuals.

    Read the article
