Search Results

Search found 19966 results on 799 pages for 'datetime query'.

Page 734 of 799

  • Java webapp: how to implement a web bug (1x1 pixel)?

    - by NoozNooz42
    In the accepted answer to the following question, a SO regular with 13K+ rep suggests using a "web bug" (a non-cacheable 1x1 img) to be able to track requests in the logs: http://stackoverflow.com/questions/1784893 How can I do this in Java? Basically, I've got two issues: how do I make sure the 1x1 image is not cacheable (how do I set the header), and how do I make sure the requests for this 1x1 image will appear in the logs? I'm looking for an exact piece of code, because I know how to write a .jsp/servlet and I know how to serve a 1x1 image :) My question is really about the exact .jsp/servlet that I should write and how/what needs to be done so that Tomcat logs the request. For example, I plan to use the following mapping:

        <servlet-mapping>
          <servlet-name>WebBugServlet</servlet-name>
          <url-pattern>/webbug*</url-pattern>
        </servlet-mapping>

    and then use an img tag referencing a "webbug.png" (or .gif). So how do I write the .jsp/servlet? What/where should I look for in the logs?
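
    One hedged sketch of such a servlet (the class name and header choices here are illustrative assumptions, not taken from the linked answer) generates the pixel in memory and marks it non-cacheable:

        import java.awt.image.BufferedImage;
        import java.io.IOException;
        import javax.imageio.ImageIO;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Mapped to /webbug* as in the servlet-mapping above.
        public class WebBugServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // Forbid caching so every page view produces a fresh request in the access log
                resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
                resp.setHeader("Pragma", "no-cache");
                resp.setDateHeader("Expires", 0);
                resp.setContentType("image/png");
                // A 1x1 transparent image generated on the fly
                BufferedImage pixel = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
                ImageIO.write(pixel, "png", resp.getOutputStream());
            }
        }

    The hit itself would then typically show up in Tomcat's access log (the file written by the AccessLogValve configured in server.xml), where the /webbug path can be grepped for; it will not appear in catalina.out.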

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents. In object-oriented terms, classes which are composed of other classes, for example, a blog post and its comments. In most of the examples of this I can come up with though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage and I believe that we have a pretty solid product. As we have been working through some of the more complex logic I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or lack of composition to blame for my overly complex tests? I've tried base classing tests, creating commonly used mock objects and ensuring that I utilize the container as much as possible to ease this issue but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?

    Read the article

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of application

    - by Extrakun
    The situation: Using an off-the-shelf PHP application, I have to add in a new module for extra functionality. Today, it was made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content, etc. So the data for the new functionality goes into a 'shared' database, while the data for the application (user login, content, uploads) goes into a 'local' database. To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A rewrite of the base application would take too long; I only have control over the new module which I am writing. The ideal solution: Is there a way to encapsulate 2 databases under one name using MySQL? I do not wish to switch DB connections or specifically name the DB to query inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so that I can invisibly read/write to two different DBs. What is the best way to handle this problem?
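
    For what it's worth, if both schemas live on the same MySQL server and the application's DB user has rights on both, tables in the shared database can be addressed from the existing connection by qualifying them with the schema name, so the wrapper never has to switch connections. A rough sketch (database, table and credential names below are invented):

        <?php
        // One PDO connection against the instance's local database.
        $pdo = new PDO('mysql:host=localhost;dbname=local_db', 'appuser', 'secret');

        // Local query: unqualified table names resolve against local_db
        $stmt = $pdo->query('SELECT id, name FROM users');

        // Shared query: qualify with the schema name, no connection switch needed
        $stmt = $pdo->query('SELECT * FROM shared_db.new_module_data WHERE user_id = 1');

        // Even a cross-database join works in a single statement
        $stmt = $pdo->query(
            'SELECT u.name, d.value
               FROM users AS u
               JOIN shared_db.new_module_data AS d ON d.user_id = u.id'
        );

    The DB wrapper could then be taught to prefix only the new module's tables with the shared schema name, leaving the rest of the application untouched.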

    Read the article

  • python can't start a new thread

    - by Giorgos Komnino
    I am building a multi-threading application. I have set up a thread pool [a Queue of size N and N Workers that get data from the queue]. When all tasks are done I use tasks.join(), where tasks is the queue. The application seems to run smoothly until suddenly, at some point (after 20 minutes, for example), it terminates with the error:

        thread.error: can't start new thread

    Any ideas? Edit: The threads are daemon threads and the code is like:

        while True:
            t0 = time.time()
            keyword_statuses = DBSession.query(KeywordStatus).filter(KeywordStatus.status==0).options(joinedload(KeywordStatus.keyword)).with_lockmode("update").limit(100)
            if keyword_statuses.count() == 0:
                DBSession.commit()
                break
            for kw_status in keyword_statuses:
                kw_status.status = 1
            DBSession.commit()
            t0 = time.time()
            w = SWorker(threads_no=32, network_server='http://192.168.1.242:8180/', keywords=keyword_statuses, cities=cities, saver=MySqlRawSave(DBSession), loglevel='debug')
            w.work()
        print 'finished'

    When are the daemon threads killed? When the application finishes or when work() finishes? Look at the thread pool and the worker (it's from a recipe):

        from Queue import Queue
        from threading import Thread, Event, current_thread
        import time

        event = Event()

        class Worker(Thread):
            """Thread executing tasks from a given tasks queue"""
            def __init__(self, tasks):
                Thread.__init__(self)
                self.tasks = tasks
                self.daemon = True
                self.start()

            def run(self):
                '''Start processing tasks from the queue'''
                while True:
                    event.wait()
                    #time.sleep(0.1)
                    try:
                        func, args, callback = self.tasks.get()
                    except Exception, e:
                        print str(e)
                        return
                    else:
                        if callback is None:
                            func(args)
                        else:
                            callback(func(args))
                        self.tasks.task_done()

        class ThreadPool:
            """Pool of threads consuming tasks from a queue"""
            def __init__(self, num_threads):
                self.tasks = Queue(num_threads)
                for _ in range(num_threads):
                    Worker(self.tasks)

            def add_task(self, func, args=None, callback=None):
                '''Add a task to the queue'''
                self.tasks.put((func, args, callback))

            def wait_completion(self):
                '''Wait for completion of all the tasks in the queue'''
                self.tasks.join()

            def broadcast_block_event(self):
                '''blocks running threads'''
                event.clear()

            def broadcast_unblock_event(self):
                '''unblocks running threads'''
                event.set()

            def get_event(self):
                '''returns the event object'''
                return event

    Read the article

  • Understanding many to many relationships and Entity Framework

    - by Anders Svensson
    I'm trying to understand the Entity Framework, and I have a table "Users" and a table "Pages". These are related in a many-to-many relationship with a junction table "UserPages". First of all I'd like to know if I'm designing this relationship correctly using many-to-many: One user can visit multiple pages, and each page can be visited by multiple users..., so am I right in using many2many? Secondly, and more importantly, as I have understood m2m relationships, the User and Page tables should not repeat information. I.e. there should be only one record for each user and each page. But then in the entity framework, how am I able to add new visits to the same page for the same user? That is, I was thinking I could simply use the Count() method on the IEnumerable returned by a LINQ query to get the number of times a user has visited a certain page. But I see no way of doing that. In Linq to Sql I could access the junction table and add records there to reflect added visits to a certain page by a certain user, as many times as necessary. But in the EF I can't access the junction table. I can only go from User to a Pages collection and vice versa. I'm sure I'm misunderstanding relationships or something, but I just can't figure out how to model this. I could always have a Count column in the Page table, but as far as I have understood you're not supposed to design database tables like that, those values should be collected by queries... Please help me understand what I'm doing wrong...
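
    One common way to model the "number of visits" requirement (a hedged sketch only; the entity and property names below are invented, and whether you can map it exactly like this depends on the EF version in use) is to promote the junction table to a first-class entity that carries the extra data:

        using System.Collections.Generic;

        public class User
        {
            public int UserId { get; set; }
            public ICollection<UserPageVisit> PageVisits { get; set; }
        }

        public class Page
        {
            public int PageId { get; set; }
            public ICollection<UserPageVisit> UserVisits { get; set; }
        }

        // The junction is no longer a hidden many-to-many: it has its own payload.
        public class UserPageVisit
        {
            public int UserId { get; set; }
            public int PageId { get; set; }
            public int VisitCount { get; set; }   // or one row per visit with a timestamp
            public User User { get; set; }
            public Page Page { get; set; }
        }

    Recording a visit then becomes an insert or update of a UserPageVisit row rather than re-adding a Page to the User's collection, and the count can be read directly instead of being derived via Count().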

    Read the article

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows:

        ID   FEATURES
        --------------------
        1    AB,CD,EF,XY
        2    PQ,AC,A3,B3
        3    AB,CDE
        4    AB1,BC3
        --------------------

    As an input, I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we'd have to write a MySQL function to achieve this. So, if we had this magical function MATCH_ANY defined in MySQL, I would then simply execute SQL as follows:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0

    The above query would return the records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings based on a delimiter. Remodeling the table is the last option for me, as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY functions, such as:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE") = 0

    In the above case, we would get an intersection of records (1, 2, 3) and (3), which would be just 3. Any help is deeply appreciated. Thanks
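
    For what it's worth, MySQL's built-in FIND_IN_SET() already matches a single value against a comma-separated column, so one hedged way to approximate MATCH_ANY without writing a stored function is to expand the input list into OR'd FIND_IN_SET calls (a sketch only; it assumes the FEATURES values have no spaces around the commas, as in the sample data):

        -- "any of AB, PQ" returns rows 1, 2 and 3 for the data above
        SELECT * FROM ITEM
        WHERE FIND_IN_SET('AB', FEATURES) > 0
           OR FIND_IN_SET('PQ', FEATURES) > 0;

        -- combining groups keeps the intersection semantics:
        -- rows matching (AB or PQ) AND CDE, i.e. just row 3
        SELECT * FROM ITEM
        WHERE (FIND_IN_SET('AB', FEATURES) > 0 OR FIND_IN_SET('PQ', FEATURES) > 0)
          AND FIND_IN_SET('CDE', FEATURES) > 0;

    The OR list would have to be built by the calling code from the input CSV, and none of these predicates can use an index, so it stays a full scan either way.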

    Read the article

  • Rails3 renders a js.erb template with a text/html content-type instead of text/javascript

    - by Yannis
    Hi, I'm building a new app with 3.0.0.beta3. I simply try to render a js.erb template for an Ajax request to the following action (in publications_controller.rb):

        def get_pubmed_data
          entry = Bio::PubMed.query(params[:pmid]) # searches PubMed and gets the entry
          @publication = Bio::MEDLINE.new(entry)   # creates Bio::MEDLINE object from entry text
          flash[:warning] = "No publication found." if @publication.title.blank? and @publication.authors.blank? and @publication.journal.blank?
          respond_to do |format|
            format.js
          end
        end

    Currently, my get_pubmed_data.js.erb template is simply:

        alert('<%= @publication.title %>')

    The server is responding with the following:

        alert('Evidence for a herpes simplex virus-specific factor controlling the transcription of deoxypyrimidine kinase.')

    which is perfectly fine, except that nothing happens in the browser, probably because the content-type of the response is 'text/html' instead of 'text/javascript', as shown by the response headers partially reproduced here:

        Status 200
        Keep-Alive timeout=5, max=100
        Connection Keep-Alive
        Transfer-Encoding chunked
        Content-Type text/html; charset=utf-8

    Is this a bug or am I missing something? Thanks for your help!
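
    Two things that may be worth checking here (hedged sketches, not a confirmed fix for 3.0.0.beta3): make sure the Ajax call actually requests the js format (e.g. jQuery's dataType: 'script', which sends an Accept header for JavaScript and evaluates the response), and, on the server side, force the MIME type explicitly while debugging:

        def get_pubmed_data
          # ... same body as above ...
          respond_to do |format|
            # explicitly tag the response as JavaScript; assumes render's
            # :content_type option behaves in beta3 as it does in the 3.0 release
            format.js { render :content_type => 'text/javascript' }
          end
        end

    If the plain XHR was being answered as HTML because Rails never saw a JS request in the first place, the first change alone is usually enough.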

    Read the article

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). When the first request has been served, it seems as if it's not asleep anymore and every subsequent request is served instantly with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?

    Read the article

  • how can I save/keep-in-sync an in-memory graph of objects with the database?

    - by Greg
    Question - What is a good best-practice approach for saving/keeping in sync an in-memory graph of objects with the database? Background: Say I have the classes Node and Relationship, and the application is building up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is good for performance, no doubt (e.g. traverse the graph from Node X to find the root parents). The graph does, however, need to be persisted into a database with tables NODES and RELATIONSHIPS. Therefore, what is a good best-practice approach for saving/keeping in sync an in-memory graph of objects with the database? Ideal requirements would include:

        - build up changes in-memory and then 'save' afterwards (mandatory)
        - when saving, apply updates to the database in the correct order to avoid hitting any database constraints (mandatory)
        - keep the persistence mechanism separate from the model, for ease in changing the persistence layer if needed, e.g. don't just wrap an ADO.NET DataRow in the Node and Relationship classes (desirable)
        - a mechanism for doing optimistic locking (desirable)

    Or is the overhead of all this for a smallish application just not worth it, and should I just hit the database each time for everything? (Assuming the response times were acceptable.) [I would still like to avoid it if it's not too much extra overhead, to remain somewhat scalable performance-wise.]

    Read the article

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with symfony and Doctrine at the beginning of a new project. I created the database diagram with MySQL Workbench, inserted the SQL into phpMyAdmin, and then tried symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have on delete/on update), and after this I tried symfony doctrine:build --sql and symfony doctrine:insert-sql. The insert-sql statement generates an error (can't create table ... failing query alter table add constraint ....), so I decided to take a look at the generated SQL and I found some differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine. I'll be brief from now on: I have two tables, EVENT and FORM, and a 1 to n relation (each event may have multiple forms), so the correct constraint (generated with Workbench) is:

        ALTER TABLE `form` ADD CONSTRAINT `fk_form_event1` FOREIGN KEY (`event_id`) REFERENCES `event` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

    The Doctrine-generated statement is:

        ALTER TABLE event ADD CONSTRAINT event_id_form_event_id FOREIGN KEY (id) REFERENCES form(event_id);

    It's totally reversed, and I am sure the error is here. What should I do? Is it even correct like this?
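
    If the schema is going to be maintained by hand after the initial import, one possibility (a hedged sketch using Doctrine 1.x schema.yml conventions as bundled with symfony 1.x; column types are assumptions) is to declare the relation on the owning side, Form, with the cascade behaviour, and then rebuild the model and SQL from that:

        # schema.yml sketch (assumption: Doctrine 1.x)
        Event:
          columns:
            id:
              type: integer(4)
              primary: true
              autoincrement: true

        Form:
          columns:
            id:
              type: integer(4)
              primary: true
              autoincrement: true
            event_id: integer(4)
          relations:
            Event:
              local: event_id
              foreign: id
              foreignAlias: Forms
              onDelete: CASCADE
              onUpdate: CASCADE

    Declaring local/foreign explicitly on the Form side should stop Doctrine from guessing the relation backwards, and the onDelete/onUpdate options put the missing cascade rules back into the generated ALTER TABLE statements.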

    Read the article

  • Why can't I access the facebook friends list after reopening a session in ios

    - by user1532390
    I am upgrading to the facebook 3.0 sdk for ios. Things went well, until I tried to open an existing session after relaunching the application. I am trying to access the list of friends for the facebook user. if ([[FBSession activeSession] isOpen]) { [request startWithCompletionHandler:^(FBRequestConnection *connection, id result, NSError *error) { //do something here }]; }else{ [[self session] openWithCompletionHandler:^(FBSession *session, FBSessionState status, NSError *error) { if ([self isValid]) { [request startWithCompletionHandler:^(FBRequestConnection *connection, id result, NSError *error) { //log this error we always get NSLog(@"%@",error); //do something else }]; } }]; } However I get this error: Error Domain=com.facebook.sdk Code=5 "The operation couldn’t be completed. (com.facebook.sdk error 5.)" UserInfo=0x1d92ff40 {com.facebook.sdk:ParsedJSONResponseKey={ body = { error = { code = 2500; message = "An active access token must be used to query information about the current user."; type = OAuthException; }; }; code = 400; }, com.facebook.sdk:HTTPStatusCode=400} I've found that if I use the FBSession reauthorize method it allows me to complete the request without error, but it also means I must show UI or switch apps every time we relaunch the application which is unacceptable. Any suggestions on what I should be doing differently?

    Read the article

  • CQRS and email notification

    - by t0PPy
    Reading up on CQRS there is a lot of talk of email notification - I'm wondering where to get the data from. Imagine a scenario where one user invites other users to an event. To inform a user that he has been invited to an event, he is sent an email. The concrete mechanics might go like this:

        1. A "CreateEvent" command with an associated collection of users to invite is received by the server.
        2. A new event aggregate is created and a method "InviteUser" is called for each user that is to be invited.
        3. Each time a user is invited to an event, a domain event "UserWasInvitedToEvent" is raised.
        4. An email notification sender picks up the domain event and sends out the notification email.

    Now my question is this: Where do I go for the information to include in the email? Say I want to include a description of the event as well as the user's name. Since this is CQRS I can't get it through my domain model; all the properties of the domain objects are private! Should I then query the write side? Or maybe move email notification to a different service entirely? Any thoughts will be much appreciated!

    Read the article

  • No Buffer Space available (maximum connections reached?) from Postgres EDB Driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or the DB and had to reboot the system. But the error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. it had not reached the max limit. Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data. We are using the following system and applications:

        Windows 2003 Server SP2
        Java 1.6
        Postgres Plus Advanced Server 8.4 database
        edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB, except for increasing the connections to 120 from 100. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

    Read the article

  • selectively show wordpress posts based on category

    - by Andy
    Hi, currently I'm using the following code as part of the sidebar code for WordPress (the code works fine):

        <ul class="linklist">
        <?php $recentPosts = new WP_Query(); $recentPosts->query('showposts=12'); while ($recentPosts->have_posts()) : $recentPosts->the_post(); ?>
          <li><a href="<?php the_permalink() ?>" rel="bookmark" title="Link to <?php the_title(); ?>"><?php the_title(); ?></a></li>
        <?php endwhile; ?>
        </ul>

    It shows the last 12 posts. But what I'm looking for is the following: first check what category the current post (the post that is showing based on the permalink) belongs to, and then only list the latest posts that belong to that same category. What should be edited? Thanks!
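
    One hedged way to do this (a sketch only; it assumes the sidebar is rendered on a single-post page, so get_the_ID() points at the post being viewed) is to read the current post's category IDs and feed them back into the query:

        <?php
        // Restrict the sidebar list to the categories of the post being viewed.
        $current_cats = wp_get_post_categories( get_the_ID() ); // array of category IDs

        $recentPosts = new WP_Query( array(
            'posts_per_page' => 12,
            'category__in'   => $current_cats,
            'post__not_in'   => array( get_the_ID() ), // optionally skip the current post itself
        ) );
        ?>
        <ul class="linklist">
        <?php while ( $recentPosts->have_posts() ) : $recentPosts->the_post(); ?>
          <li><a href="<?php the_permalink(); ?>" rel="bookmark" title="Link to <?php the_title(); ?>"><?php the_title(); ?></a></li>
        <?php endwhile; wp_reset_postdata(); ?>
        </ul>

    The wp_reset_postdata() call puts the main loop's post back in place so the rest of the page keeps referring to the post that was originally being displayed.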

    Read the article

  • Representing complex scheduled recurrence in a database

    - by David Pfeffer
    I have the interesting problem of representing complex schedule data in a database. As a guideline, I need to be able to represent the entirety of what the iCalendar -- ics -- format can represent, but in a database. I'm not actually implementing anything relating to ics, but it gives a good scope of the type of rules I need to be able to model. I need to allow representation of a single event or a recurring event based on multiple times per day, days of the week, week of a month, month, year, or some combination of those. For example, the third Thursday in November annually, or the 25th of December annually, or every two weeks starting November 2 and continuing until September 8 the following year. I don't care about insertion efficiency, but query efficiency is critical. The operation I will be doing most often is providing either a single date/time or a date/time range and trying to determine if the defined schedule matches any part of the date/time range. Other operations can be slower. For example, given January 15, 2010 at 10:00 AM through January 15, 2010 at 11:00 AM, find all schedules that match at least part of that time. (i.e. a schedule that covers 10:30 - 11:00 still matches.) Any suggestions? I looked at http://stackoverflow.com/questions/1016170/how-would-one-represent-scheduled-events-in-an-rdbms but it doesn't cover the scope of the type of recurrence rules I'd like to model.
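
    As one illustration of a common approach (a hedged sketch only; the column names are invented, it is MySQL-flavoured SQL, and it covers only part of what iCalendar RRULEs can express), the rule is stored once and the concrete occurrences are expanded into a separate table so that the hot query becomes a simple range overlap:

        -- An RRULE-like rule table plus a materialized occurrences table.
        CREATE TABLE schedule (
            schedule_id   INT PRIMARY KEY,
            dtstart       DATETIME NOT NULL,
            dtend         DATETIME NOT NULL,    -- end of the first occurrence
            freq          VARCHAR(10),          -- DAILY, WEEKLY, MONTHLY, YEARLY, or NULL for one-off
            interval_n    INT DEFAULT 1,        -- every N days/weeks/...
            by_day        VARCHAR(20),          -- e.g. 'TH' for Thursdays
            by_set_pos    INT,                  -- e.g. 3 for "third Thursday"
            by_month      INT,                  -- e.g. 11 for November
            until_date    DATETIME              -- NULL = no end
        );

        -- Occurrences expanded ahead of time (up to some horizon) by application code.
        CREATE TABLE occurrence (
            schedule_id   INT NOT NULL,
            starts_at     DATETIME NOT NULL,
            ends_at       DATETIME NOT NULL,
            INDEX idx_occurrence_range (starts_at, ends_at)
        );

        -- "Which schedules touch 2010-01-15 10:00..11:00?"
        SELECT DISTINCT schedule_id
        FROM occurrence
        WHERE starts_at < '2010-01-15 11:00:00'
          AND ends_at   > '2010-01-15 10:00:00';

    The trade-off is that recurrence without an end date can only be expanded up to a rolling horizon, which has to be topped up periodically.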

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:

        - Oracle backend
        - Rapid development
        - (L)GPL-free
        - Free

    I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend to convert DataSets to POCOs? Thanks in advance.
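
    For what the reflection route might look like, here is a minimal hedged sketch (it assumes property names match column names exactly and does no type coercion, nullable handling, or caching of the reflected properties):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;
        using System.Reflection;

        public static class DataSetMapper
        {
            // Maps each DataRow of a table to a new T by matching column names to property names.
            public static IEnumerable<T> ToPocos<T>(DataTable table) where T : new()
            {
                PropertyInfo[] props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);

                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (PropertyInfo prop in props)
                    {
                        if (table.Columns.Contains(prop.Name) && row[prop.Name] != DBNull.Value)
                        {
                            prop.SetValue(item, row[prop.Name], null);
                        }
                    }
                    yield return item;
                }
            }
        }

        // Usage sketch (type and table names invented):
        // var users = DataSetMapper.ToPocos<User>(myDataSet.Tables["USERS"]).AsQueryable();

    Caching the PropertyInfo lookups per type, and possibly compiling setters with expression trees, would be the usual next step if the reflection cost shows up in profiling.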

    Read the article

  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#. As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and the only way to get a clean string was to pass through some SQL-injection-attack-preventing logic. The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious if anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance of abstracted primitive numeric types on my calculations at runtime.
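
    As a point of comparison for the wrapping approach being discussed, a minimal sketch might look like this (names invented, no operators or performance tuning):

        using System;

        public struct Radians
        {
            public readonly double Value;
            public Radians(double value) { Value = value; }
        }

        public struct Degrees
        {
            public readonly double Value;
            public Degrees(double value) { Value = value; }

            public Radians ToRadians()
            {
                return new Radians(Value * Math.PI / 180.0);
            }
        }

        public static class Trig
        {
            // Demanding the unit in the signature turns a unit mix-up into a compile error.
            public static double Sin(Radians angle)
            {
                return Math.Sin(angle.Value);
            }
        }

    In practice the overhead of such single-field structs is often negligible because the JIT tends to inline the accessors, but that is worth measuring on the actual hot numeric loops rather than assumed.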

    Read the article

  • jquery-ui autocomplete with ASP MVC suggestions not displaying

    - by adamnickerson
    I have been trying to get a simple example of the jquery-ui autocomplete to work. I have a controller setup to handle the query, and it returns the json that looks to be in order, but I am getting no suggestions showing up. Here are the js libraries I am including: <script type="text/javascript" language="javascript" src="/Scripts/jquery-1.4.1.js"></script> <script type="text/javascript" language="javascript" src="/Scripts/jquery-ui-1.8.1.custom.min.js"></script> <link href="/Content/jquery-ui-1.8.1.custom.css" rel="stylesheet" type="text/css" /> and here is the javascript and the form tags: <script type="text/javascript"> $(function () { $("#organization").autocomplete({ source: function (request, response) { $.ajax({ url: '/Organization/OrganizationLookup', dataType: "json", data: { limit: 12, q: request.term } }) }, minLength: 2 }); }); </script> <div class="ui-widget"> <label for="organization">Organization: </label> <input id="organization" /> </div> I get back a json response that looks reasonable from my controller: [{"id":"Sector A","value":"Sector A"},{"id":"Sector B","value":"Sector B"},{"id":"Sector C","value":"Sector C"}] id and value seem to be the default naming that autocomplete is looking for. But I get no joy at all. Any thoughts?
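
    One thing that stands out in snippets like this (sketch below, not a guaranteed fix): when source is given as a function, jQuery UI hands it a response callback that must be invoked with the suggestion array, and the $.ajax call here never passes the returned JSON to it, so the widget has nothing to display:

        $(function () {
            $("#organization").autocomplete({
                minLength: 2,
                source: function (request, response) {
                    $.ajax({
                        url: '/Organization/OrganizationLookup',
                        dataType: "json",
                        data: { limit: 12, q: request.term },
                        // hand the parsed JSON back to the widget so it can render the menu
                        success: function (data) {
                            response(data);
                        },
                        // don't leave the menu waiting if the call fails
                        error: function () {
                            response([]);
                        }
                    });
                }
            });
        });

    Depending on the ASP.NET MVC version, the controller action may also need to return Json(results, JsonRequestBehavior.AllowGet) so that a GET request is allowed to return JSON at all.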

    Read the article

  • querying a huge database table takes too much time in mysql

    - by Vijay
    Hi all, I am running sql queries on a mysql db table that has 110Mn+ unique records for whole day. Problem: Whenever I run any query with "where" clause it takes at least 30-40 mins. Since I want to generate most of data on the next day, I need access to whole db table. Could you please guide me to optimize / restructure the deployment model? Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0 4 GB RAM, Dual Core dual CPU 3GHz RHEL 3 my.cnf contents : [root@reports root]# cat /etc/my.cnf [mysqld] datadir=/data/mysql/data/ socket=/tmp/mysql.sock sort_buffer_size = 2000000 table_cache = 1024 key_buffer = 128M myisam_sort_buffer_size = 64M # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 [mysql.server] user=mysql basedir=/data/mysql/data/ [mysqld_safe] err-log=/data/mysql/data/mysqld.log pid-file=/data/mysql/data/mysqld.pid [root@reports root]# DB table details: CREATE TABLE `RAW_LOG_20100504` ( `DT` date default NULL, `GATEWAY` varchar(15) default NULL, `USER` bigint(12) default NULL, `CACHE` varchar(12) default NULL, `TIMESTAMP` varchar(30) default NULL, `URL` varchar(60) default NULL, `VERSION` varchar(6) default NULL, `PROTOCOL` varchar(6) default NULL, `WEB_STATUS` int(5) default NULL, `BYTES_RETURNED` int(10) default NULL, `RTT` int(5) default NULL, `UA` varchar(100) default NULL, `REQ_SIZE` int(6) default NULL, `CONTENT_TYPE` varchar(50) default NULL, `CUST_TYPE` int(1) default NULL, `DEL_STATUS_DEVICE` int(1) default NULL, `IP` varchar(16) default NULL, `CP_FLAG` int(1) default NULL, `USER_LOCATE` bigint(15) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000; Thanks in advance! Regards,
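
    Without knowing the actual WHERE clauses it is hard to be specific, but the usual first step on a MyISAM table this size is to add indexes on the filtered columns, since the schema above has no indexes (or primary key) at all. A sketch only; USER and GATEWAY here are just example columns, so pick the ones your queries actually filter on:

        -- Example only: index the columns that appear in WHERE clauses.
        ALTER TABLE RAW_LOG_20100504
            ADD INDEX idx_user (`USER`),
            ADD INDEX idx_gateway_dt (`GATEWAY`, `DT`);

        -- Compare the plan before and after to confirm the index is actually used:
        EXPLAIN SELECT COUNT(*) FROM RAW_LOG_20100504 WHERE `USER` = 123456789012;

    Building the index on 110M+ rows is itself a long one-off operation, and the 128M key_buffer in my.cnf is small for indexes of that size, so raising it on a 4 GB box would likely help once the indexes exist.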

    Read the article

  • How to Detect in Windows Registry if user has .Net Framework installed?

    - by Sarah Weinberger
    How do I detect in the Windows Registry whether a user has the .NET Framework 4.5 installed? I am not looking for a .NET-based solution, as the query is from Inno Setup. I know from reading another post here on Stack Overflow that .NET Framework 4.5 is an in-place upgrade to 4.0. I already know how to check whether a user has version 4.0 installed on the system, namely by checking the following:

        function FindFramework(): Boolean;
        var
          bVer4x0: Boolean;
          bVer4x0Client: Boolean;
          bVer4x0Full: Boolean;
          bSuccess: Boolean;
          iInstalled: Cardinal;
        begin
          Result := False;
          bVer4x0Client := False;
          bVer4x0Full := False;
          bVer4x0 := RegKeyExists(HKLM, 'SOFTWARE\Microsoft\.NETFramework\policy\v4.0');
          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Client', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Client := True;
          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Full', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Full := True;
          if (True = bVer4x0Full) then
          begin
            Result := True;
          end;
        end;

    I checked the registry and there is no v4.5 folder, which makes sense if .NET Framework 4.5 is an in-place upgrade. Still, Control Panel's Programs and Features includes the listing. I know that issuing dotNetFx45_Full_setup.exe /q will probably have no bad effect when installing on a system that already has version 4.5, but I would still like to not install the upgrade if it already exists - faster and fewer problems.
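
    For the 4.5 check specifically, the usual approach (a hedged sketch in the same Inno Setup Pascal style; 378389 is the documented Release number for 4.5, and later versions report higher values) is to read the Release DWORD under the same v4\Full key, which only appears once 4.5 or later has been layered on top of 4.0:

        function IsDotNet45Installed(): Boolean;
        var
          release: Cardinal;
        begin
          Result := False;
          // .NET 4.5 and later write a 'Release' value under the v4 Full key.
          if RegQueryDWordValue(HKLM, 'SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full',
                                'Release', release) then
            Result := (release >= 378389);  // 378389 = 4.5; higher values = 4.5.1, 4.5.2, 4.6, ...
        end;

    This keeps the existing FindFramework() logic for the 4.0 baseline and adds a separate, explicit test before deciding whether to launch dotNetFx45_Full_setup.exe.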

    Read the article

  • Convert Google Analytics cookies to Local/Session Storage

    - by David Murdoch
    Google Analytics sets 4 cookies that will be sent with all requests to that domain (and often its subdomains). From what I can tell, no server actually uses them directly; they're only sent with __utm.gif as a query param. Now, obviously Google Analytics reads, writes and acts on their values, and they will need to be available to the GA tracking script. So, what I am wondering is if it is possible to:

        1. rewrite the __utm* cookies to local storage after ga.js has written them
        2. delete them after ga.js has run
        3. rewrite the cookies FROM local storage back to cookie form right before ga.js reads them
        4. start over

    Or, monkey-patch ga.js to use local storage before it begins the cookie read/write part. Obviously, if we are going so far out of the way to remove the __utm* cookies, we'll want to also use the Async variant of Analytics. I'm guessing the down vote was because I didn't ask a question. DOH! My questions are: Can it be done as described above? If so, why hasn't it been done? I have a default HTML/CSS/JS boilerplate template that passes YSlow, PageSpeed, and Chrome's Audit with near-perfect scores. I'm really looking for a way to squeeze those remaining cookie bytes from Google Analytics in browsers that support local storage.
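
    As a rough illustration of steps 1-3 (a sketch only - it ignores the cookie path, domain and expiry that the real __utm* cookies depend on, so it is a starting point rather than a drop-in):

        // Stash any __utm* cookies into localStorage and expire them from document.cookie.
        function stashUtmCookies() {
          document.cookie.split(';').forEach(function (pair) {
            var eq = pair.indexOf('=');
            var name = pair.slice(0, eq).trim();
            if (name.indexOf('__utm') === 0) {
              localStorage.setItem(name, pair.slice(eq + 1));
              // real code would also need the original path/domain to remove it reliably
              document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
            }
          });
        }

        // Restore them right before ga.js needs to read them again.
        function restoreUtmCookies() {
          for (var i = 0; i < localStorage.length; i++) {
            var name = localStorage.key(i);
            if (name.indexOf('__utm') === 0) {
              document.cookie = name + '=' + localStorage.getItem(name) + '; path=/';
            }
          }
        }

    The fiddly part is the "right before ga.js reads them" timing: with the async queue there is no clean hook between _gaq pushes and the cookie access, which is presumably why this trick is rarely seen in the wild.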

    Read the article

  • Zend_Db_Select: regrouping conditions in where clause

    - by pvledoux
    Hi, I would like to do something like this:

        $select = $myTbl->select()
            ->from('download_log')
            ->joinLeft(...... etc........
            ->joinLeft(...... etc........
            ->joinLeft(...... etc........);

        //Filter all configured bots (Google, Yahoo, etc.)
        if(isset($this->_config->statistics->bots)){
            $bots = explode(',',$this->_config->statistics->bots);
            foreach ($bots as $bot){
                $select = $select->orWhere("user_agent NOT LIKE '%$bot%'");
            }
        }
        $select = $select->where("download_log.download_log_ts BETWEEN '".$start_date." 00:00:00' AND '".$end_date." 23:59:59'");

    But the outputted query is not correct, because the orWhere clauses are not grouped together into a single AND clause. I would like to know if it is possible to regroup those OR clauses inside a pair of parentheses. My current alternative is the following:

        //Filter all configured bots (Google, Yahoo, etc.)
        if(isset($this->_config->statistics->bots)){
            $bots = explode(',',$this->_config->statistics->bots);
            foreach ($bots as $bot){
                $stmt .= "user_agent NOT LIKE '%$bot%' OR ";
            }
            $stmt = substr($stmt,0,strlen($stmt)-3); //remove the last OR
            $select = $select->where("($stmt)");
        }

    Thanks!
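
    One hedged alternative for the grouping is to collect the OR conditions in an array and hand Zend_Db_Select a single parenthesised where() - essentially a tidier version of the workaround above, with the adapter doing the quoting (sketch; it assumes $myTbl is a Zend_Db_Table so getAdapter() is available):

        //Filter all configured bots (Google, Yahoo, etc.)
        if (isset($this->_config->statistics->bots)) {
            $bots  = explode(',', $this->_config->statistics->bots);
            $conds = array();
            foreach ($bots as $bot) {
                // quoteInto escapes the value instead of interpolating it into the string
                $conds[] = $myTbl->getAdapter()->quoteInto('user_agent NOT LIKE ?', '%' . $bot . '%');
            }
            // a single where() call, so Zend_Db_Select ANDs this whole group with the rest
            $select->where('(' . implode(' OR ', $conds) . ')');
        }

    Because everything ends up in one where() call, the generated SQL keeps the bot filter inside its own parentheses and the date-range condition is ANDed against the group as intended.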

    Read the article

  • Can I detect whether an object has called GC.SuppressFinalize?

    - by Joe White
    Is there a way to detect whether or not an object has called GC.SuppressFinalize? I have an object that looks something like this (full-blown Dispose pattern elided for clarity): public class ResourceWrapper { private readonly bool _ownsResource; private readonly UnmanagedResource _resource; public ResourceWrapper(UnmanagedResource resource, bool ownsResource) { _resource = resource; _ownsResource = ownsResource; if (!ownsResource) GC.SuppressFinalize(this); } ~ResourceWrapper() { if (_ownsResource) // clean up the unmanaged resource } } If the ownsResource constructor parameter is false, then the finalizer will have nothing to do -- so it seems reasonable (if a bit quirky) to call GC.SuppressFinalize right from the constructor. However, because this behavior is quirky, I'm very tempted to note it in an XML doc comment... and if I'm tempted to comment it, then I ought to write a unit test for it. But while System.GC has methods to set an object's finalizability (SuppressFinalize, ReRegisterForFinalize), I don't see any methods to get an object's finalizability. Is there any way to query whether GC.SuppressFinalize has been called on a given instance, short of buying Typemock or writing my own CLR host?
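
    There is no public GC API that reports suppression directly, but one hedged way to unit-test the behaviour is to observe it indirectly: expose a flag that the finalizer sets, drop the only reference, and force a collection. The sketch below assumes a static ResourceWrapper.FinalizerRan test hook that the real class would have to add (finalization timing is not strictly guaranteed, so a test like this can be flaky by nature):

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class ResourceWrapperFinalizationTests
        {
            [Test]
            public void Finalizer_DoesNotRun_WhenWrapperDoesNotOwnResource()
            {
                // Assumed test hook: a static bool set to true inside ~ResourceWrapper().
                ResourceWrapper.FinalizerRan = false;

                CreateAndDropWrapper(false);

                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();

                Assert.IsFalse(ResourceWrapper.FinalizerRan);
            }

            private static void CreateAndDropWrapper(bool ownsResource)
            {
                // Created in a separate method so no live reference survives on the test's stack.
                var wrapper = new ResourceWrapper(new UnmanagedResource(), ownsResource);
            }
        }

    This tests the observable consequence of GC.SuppressFinalize (the finalizer never runs) rather than the call itself, which is arguably what the documented behaviour promises anyway.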

    Read the article
