Search Results

Search found 26798 results on 1072 pages for 'difference between detach attach and restore backup a db'.


  • jQuery - array problem, help please

    - by russp
    Sorry folks, I really need help with an array-posting problem. I would imagine it's quite simple, but it's beyond me. I have this jQuery function (using sortables) $(function() { $("#col1, #col2, #col3, #col4").sortable({ connectWith: '.column', items: '.portlet:not(.ui-state-disabled)', stop : function () { serial_1 = $('#col1').sortable('serialize'); serial_2 = $('#col2').sortable('serialize'); serial_3 = $('#col3').sortable('serialize'); serial_4 = $('#col4').sortable('serialize'); } }); }); Now I can post it to a database like this, and I can loop this ajax through all 4 "serials" $.ajax({ url: "test.php", type: "post", data: serial_1, error: function(){ alert(testit); } }); But that is not what I want to do, as it creates 4 rows in the DB table. I want/need to create a single "nested array" from the 4 serials so that it enters the DB as 1 (one) row. My "base" database data looks like this: a:4:{s:4:"col1";a:3:{i:1;s:6:"forums";i:2;s:4:"chat";i:3;s:5:"blogs";}s:4:"col2";a:2:{i:1;s:5:"pages";i:2;s:7:"members";}s:4:"col3";a:2:{i:1;s:9:"galleries";i:2;s:4:"shop";}s:4:"col4";a:1:{i:1;s:4:"news";}} Therefore the jQuery array should "replicate" and create it (obviously it will change on sorting). Help please, thanks in advance.
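
    As a conceptual sketch of the "one nested structure, one row" idea (in Python with JSON rather than PHP's serialize format, so the names and the storage format here are only illustrative): build one structure that holds all four columns, serialize it once, and store that single value instead of posting each serialized column separately.

      import json

      # Collect the items of each sortable column into one nested structure
      # (the column/item names below are placeholders taken from the question).
      columns = {
          "col1": ["forums", "chat", "blogs"],
          "col2": ["pages", "members"],
          "col3": ["galleries", "shop"],
          "col4": ["news"],
      }

      payload = json.dumps(columns)   # one serialized value...
      # ...which goes into a single DB row and is decoded with json.loads()
      # when the layout needs to be restored.
      restored = json.loads(payload)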

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well. Deterministic (things that aren't going to change during gameplay): Attributes (Strength, Agility, etc.) and their descriptions; Skills and their descriptions and requirements; Races, Factions, Equipment, etc.; Base Attribute/Skill/Equipment loadouts for monsters. Nondeterministic (things that will change a lot during gameplay): Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.; Player inventory, cash, experience, level; Player quest states; Player FactionRelationships; ...and so on. My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

      Det model/db          Nondet model/db
      ____________          ________________________
      |Attributes  |        |PlayerAttributeModifiers|
      |------------|        |------------------------|
      |Id          |        |Id                      |
      |Name        |        |AttributeId             |
      |Description |        |SourceId                |
       ------------         |Value                   |
                             ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models each with their own database? With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions, I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • Auto filling polymorphic table on save or on delete in django

    - by Mo J. Mughrabi
    Hi, I'm working on a project in which I made an app "core"; it will contain some of the reused models across my projects. Most of those are polymorphic models (generic content types) and will be linked to different models. In the example below I am trying to create an audit model that will be linked to several models which may require auditing. This is the polls/models.py from django.db import models from django.contrib.auth.models import User from core.models import * from django.contrib.contenttypes import generic class Poll(models.Model): ## TODO: Document question = models.CharField(max_length=300) question_slug=models.SlugField(editable=False) start_poll_at = models.DateTimeField(null=True) end_poll_at = models.DateTimeField(null=True) is_active = models.BooleanField(default=True) audit_obj=generic.GenericRelation(Audit) def __unicode__(self): return self.question class Choice(models.Model): ## TODO: Document choice = models.CharField(max_length=200) poll=models.ForeignKey(Poll) audit_obj=generic.GenericRelation(Audit) class Vote(models.Model): ## TODO: Document choice=models.ForeignKey(Choice) Ip_Address=models.IPAddressField(editable=False) vote_at=models.DateTimeField("Vote at", editable=False) here is the core/models.py from django.db import models from django.contrib.auth.models import User from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes import generic class Audit(models.Model): ## TODO: Document # Polymorphic model using generic relation through DJANGO content type created_at = models.DateTimeField("Created at", auto_now_add=True) created_by = models.ForeignKey(User, db_column="created_by", related_name="%(app_label)s_%(class)s_y+") updated_at = models.DateTimeField("Updated at", auto_now=True) updated_by = models.ForeignKey(User, db_column="updated_by", null=True, blank=True, related_name="%(app_label)s_%(class)s_y+") content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField(unique=True) content_object = generic.GenericForeignKey('content_type', 'object_id') and here is polls/admin.py from django.core.context_processors import request from polls.models import Poll, Choice from core.models import * from django.contrib import admin class ChoiceInline(admin.StackedInline): model = Choice extra = 3 class PollAdmin(admin.ModelAdmin): inlines = [ChoiceInline] admin.site.register(Poll, PollAdmin) I'm quite new to Django. What I am trying to do here is insert a record in Audit when a record is inserted in polls, and then update that same record when the polls record is updated.
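
    One possible direction, sketched below against the models above, is to create or update the Audit row from the admin's save_model hook, where request.user is available (outside the admin a post_save signal is the usual hook, but the request user is not available there). This is only a sketch, not a drop-in answer.

      # polls/admin.py -- a sketch, assuming the Poll, Choice and Audit models above
      from django.contrib import admin
      from django.contrib.contenttypes.models import ContentType

      from core.models import Audit
      from polls.models import Poll, Choice

      class ChoiceInline(admin.StackedInline):
          model = Choice
          extra = 3

      class PollAdmin(admin.ModelAdmin):
          inlines = [ChoiceInline]

          def save_model(self, request, obj, form, change):
              obj.save()
              content_type = ContentType.objects.get_for_model(obj)
              # One Audit row per object: create it on insert, touch it on update.
              audit, created = Audit.objects.get_or_create(
                  content_type=content_type,
                  object_id=obj.pk,
                  defaults={"created_by": request.user},
              )
              if not created:
                  audit.updated_by = request.user
                  audit.save()  # auto_now refreshes updated_at

      admin.site.register(Poll, PollAdmin)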

    Read the article

  • Use Django ORM as standalone [closed]

    - by KeyboardInterrupt
    Possible Duplicates: Use only some parts of Django? Using only the DB part of Django I want to use the Django ORM as standalone. Despite an hour of searching Google, I'm still left with several questions: Does it require me to set up my Python project with a settings.py, /myApp/ directory, and modules.py file? Can I create a new models.py and run syncdb to have it automatically set up the tables and relationships, or can I only use models from existing Django projects? There seem to be a lot of questions regarding PYTHONPATH. If you're not calling existing models, is this needed? I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files e.g.: db/ __init__.py settings.py myScript.py orm/ __init__.py models.py And the basic essentials: # settings.py from django.conf import settings settings.configure( DATABASE_ENGINE = "postgresql_psycopg2", DATABASE_HOST = "localhost", DATABASE_NAME = "dbName", DATABASE_USER = "user", DATABASE_PASSWORD = "pass", DATABASE_PORT = "5432" ) # orm/models.py # ... # myScript.py # import models.. And whether you need to run something like: django-admin.py inspectdb ... (Oh, I'm running Windows, if that changes anything regarding command-line arguments.)
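
    For what it's worth, a minimal standalone sketch might look like the following; it uses the newer DATABASES setting and django.setup() (Django 1.7+) rather than the older DATABASE_* keys shown above, and the package name "orm", the model "MyModel" and the SQLite file name are placeholders.

      # myScript.py -- a sketch, not a drop-in recipe
      import django
      from django.conf import settings

      settings.configure(
          DATABASES={
              "default": {
                  "ENGINE": "django.db.backends.sqlite3",  # or the postgresql backend
                  "NAME": "standalone.db",
              }
          },
          INSTALLED_APPS=["orm"],  # the package that contains models.py
      )
      django.setup()

      from orm.models import MyModel          # hypothetical model
      from django.db import connection

      # Create the table directly (instead of running syncdb/migrate):
      with connection.schema_editor() as editor:
          editor.create_model(MyModel)

      MyModel.objects.create(name="hello")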

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • Is there a method to linearize a Document?

    - by M.R.
    A web service responds with a message which is not linearized. This produces a problem when trying to access (as an example) the root element with getSOAPBody().getFirstChild(). In a linearized document this call would return the first element inside the body. If the message is not properly formatted, you may get the line break between the SOAP body and the first element. The problem should be easy to solve with a recursive method, but I was wondering if there is a method for it, like normalize etc. Edit: XML Response: ... XMLSchema-instance"><soapenv:Body> <wst:RequestSecurityTokenResponse xmlns:wst="http://schemas.xmlsoap.org/ws/200/02/trust">... JAVA CODE: final DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder db = factory.newDocumentBuilder(); Document result = db.newDocument(); //messResult is the response result.appendChild(result.importNode(messResult.getSOAPBody().getFirstChild(),true)); Error Log: HIERARCHY_REQUEST_ERR: An attempt was made to insert a node where it is not permitted.
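
    Since the Java DOM and Python's minidom both follow the W3C DOM, the usual fix is the same in either: skip whitespace-only text nodes instead of taking the first child blindly (in Java, iterate getChildNodes() and keep the first node whose getNodeType() is ELEMENT_NODE). A small Python sketch of the idea, with a made-up payload:

      from xml.dom import minidom, Node

      doc = minidom.parseString("<Body>\n  <Token>abc</Token>\n</Body>")
      first_element = next(
          child for child in doc.documentElement.childNodes
          if child.nodeType == Node.ELEMENT_NODE   # ignore the "\n  " text node
      )
      print(first_element.tagName)  # Token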

    Read the article

  • Using Hibernate with MS ACCESS 2007 Database (Free JDBC Driver)

    - by Quentin T.
    1. I want to do a reverse engineering action with the Hibernate plugin for Eclipse on an MS Access 2007 database. I'm forced to use an existing MS Access 2007 db. An easy solution is to buy the HXTT driver, but I want to use a free driver to do my work. So I tried to apply this post: http://www.programmingforfuture.com/2011/06/how-to-use-ms-access-with-hibernate.html (that uses the SQL Server dialect and the driver sun.jdbc.odbc.JdbcOdbcDriver). Unfortunately I get an error that nobody else on the internet seems to have hit: Exception while generating code Reason : org.hibernate.exception.GenericJDBCException: Error while reading primary key meta data for `c:/myaccessdb.mdb`.TableTest1 I have tried to change the primary key on my MS Access DB (deleting all primary keys) and to try the reverse engineering on an MS Access database with only one table and no primary key, but I get the problem every time. 2. The purpose of my job is to fill an Oracle 11g database daily (or weekly) with data from an existing MS Access 2007 database, and I thought of using a Java procedure (Hibernate EJB) launched automatically every week to do the data transfer. Is this the best solution? Configuration: sun.jdbc.odbc.JdbcOdbcDriver v??? Hibernate v3.4 Eclipse ps: If you are an HXTT developer or seller please be indulgent with my post ;). Making money by making people believe that you help is bad! A solution is to use the Derby Client driver, as in the post: Does anyone know if Hibernate and java will work effectively with Access? But a clarification of the answer from Rich Seller is required. Could you explain your answer and your configuration (hibernate.cfg.xml, persistence.xml and what URL you use in the property name="hibernate.connection.url") without using the paid HXTT driver but with the free Derby driver?

    Read the article

  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data: CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [FirstName] [nvarchar](max) NOT NULL, [LastName] [nvarchar](max) NOT NULL, [Email] [nvarchar](250) NOT NULL, [Password] [nvarchar](max) NULL, [UserName] [nvarchar](250) NOT NULL, [LanguageId] [int] NOT NULL, [Data] [xml] NULL, [IsDeleted] [bit] NOT NULL,... In the Data column there's this xml <data> <RRN>...</RRN> <DateOfBirth>...</DateOfBirth> <Gender>...</Gender> </data> Now, executing this query: SELECT UserId FROM Users WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN after clearing the cache takes (if I execute it a couple of times after each other) 910, 739, 630, 635, ... ms. Now, a db specialist told me that adding a function, a view and changing the query would make searching for a user with a given RRN much faster. But, instead, these are the results when I execute with the changes from the db specialist: 2584, 2342, 2322, 2383, ... This is the added function: CREATE FUNCTION dbo.fn_Users_RRN(@data xml) RETURNS varchar(100) WITH SCHEMABINDING AS BEGIN RETURN @data.value('(/data/RRN)[1]', 'varchar(max)'); END; The added view: CREATE VIEW vwi_Users WITH SCHEMABINDING AS SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users Indexes: CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId) CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN) And then the changed query: SELECT UserId FROM Users WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11' Why is the solution with a function and a view slower?

    Read the article

  • OpenID Authentication using AuthLogic Error

    - by Steve
    Hi, I am trying to implement OpenID authentication using Authlogic. I have installed open_id_authentication in the process, but when I enter rake open_id_authentication:db:create --trace I get the following error (in /Users/felix/login): rake aborted! Don't know how to build task 'open_id_authentication:db:create' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:1728:in `[]' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2050:in `invoke_task' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `block (2 levels) in top_level' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2029:in `block in top_level' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2001:in `block in run' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /usr/local/lib/ruby/gems/1.9.1/gems/rake-0.8.7/bin/rake:31:in `<top (required)>' /usr/local/bin/rake:19:in `load' /usr/local/bin/rake:19:in `<main>' Can someone tell me what I am doing incorrectly? Thanks

    Read the article

  • Debugging an XBAP application with 64-bit browser

    - by Anne Schuessler
    We have an XBAP application that fails when opened in Internet Explorer 8 64 bit. We only get a pretty generic error which makes it hard to determine where the error is coming from. I'm trying to find a way to debug the application with IE 8 64 bit, but I haven't figured out how to do this. I can't set the 64 bit version as the standard browser and overwriting the browser path in the browsers.xml for Visual Studio doesn't work as well. It just gets overwritten as soon as I hit F5 to debug to point to the 32 bit IE. I have figured out how to start the application from Debug with the 64 bit browser by changing the Debug options from "Start browser with URL" to "Start external program" and setting the command line arguments to point to the bin folder. Unfortunately then the XBAP is looking for its config.deploy file which doesn't seem to be generated during regular debug. This doesn't happen when using "Start browser with URL" and the application doesn't seem to care for this file then. Does anybody know why there's a difference between "Start browser with URL" and "Start external program" in the Debug options which might cause this difference in behavior when Debug is started? Also, does anybody know how to successfully debug an XBAP with a 64-bit browser?

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second). I have a (web) application that allows a user to look at the data in real time. At some point a user can click the "take a snapshot" button, and it will open the snapshot in a different window. Then on that form there are "print" and "download" buttons that will either generate a page for printing, or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go to the DB to get the latest data for that. Details: The SQL CE database is exposed through a WCF web service. A snapshot consists of up to 500 records, 10 columns each. An expiration time on the snapshot of 2 hours is sufficient. It is a low-traffic application, so I don't expect more than a few (5) connections at the same time. Losing a snapshot is not a big deal; the user can simply generate a new one. The database is accessed by a self-hosted WCF web service using LINQ to SQL. The web site is ASP.NET MVC hosted on UltiDev Cassini. The database and web site will most likely be on the same box when deployed. The entire app is intranet bound. Problem: I need to cache the snapshot of the data at the moment the user pressed the "take a snapshot" button, so that I can use the same data to generate the print page or generate a file for download. Solution 1: Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself. Solution 2: Cache the snapshot in memory on either the DB server or the web server. Question: Is there anything wrong with the proposed solutions? Any different solution suggestions?
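
    A sketch of solution 2 (in Python purely for illustration, since the app itself is .NET): keep snapshots in an in-memory dictionary keyed by a generated id, with a 2-hour expiry, and hand the id to the print/download pages; all names here are made up.

      import time
      import uuid

      SNAPSHOT_TTL = 2 * 60 * 60          # 2 hours, in seconds
      _snapshots = {}                     # snapshot_id -> (created_at, rows)

      def take_snapshot(rows):
          snapshot_id = uuid.uuid4().hex
          _snapshots[snapshot_id] = (time.time(), rows)
          return snapshot_id              # passed to the print/download pages

      def get_snapshot(snapshot_id):
          entry = _snapshots.get(snapshot_id)
          if entry is None or time.time() - entry[0] > SNAPSHOT_TTL:
              _snapshots.pop(snapshot_id, None)
              return None                 # expired/unknown: take a new snapshot
          return entry[1]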

    Read the article

  • UTF-8 HTML and CSS files with BOM (and how to remove the BOM with Python)

    - by Cameron
    First, some background: I'm developing a web application using Python. All of my (text) files are currently stored in UTF-8 with the BOM. This includes all my HTML templates and CSS files. These resources are stored as binary data (BOM and all) in my DB. When I retrieve the templates from the DB, I decode them using template.decode('utf-8'). When the HTML arrives in the browser, the BOM is present at the beginning of the HTTP response body. This generates a very interesting error in Chrome: Extra <html> encountered. Migrating attributes back to the original <html> element and ignoring the tag. Chrome seems to generate an <html> tag automatically when it sees the BOM and mistakes it for content, making the real <html> tag an error. So, using Python, what is the best way to remove the BOM from my UTF-8 encoded templates (if it exists -- I can't guarantee this in the future)? For other text-based files like CSS, will major browsers correctly interpret (or ignore) the BOM? They are being sent as plain binary data without .decode('utf-8'). Note: I am using Python 2.5. Thanks!
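
    A sketch of the BOM-stripping part (the raw bytes come straight from the DB; codecs.BOM_UTF8 and the utf-8-sig codec are both in the standard library):

      import codecs

      def decode_template(raw):
          # raw: the template as bytes; drop a leading UTF-8 BOM if present.
          if raw.startswith(codecs.BOM_UTF8):
              raw = raw[len(codecs.BOM_UTF8):]
          return raw.decode('utf-8')

      # Equivalently, the 'utf-8-sig' codec strips the BOM when it is there
      # and behaves like plain 'utf-8' when it is not:
      # text = raw.decode('utf-8-sig')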

    Read the article

  • Why notify listeners in a content provider query method?

    - by cbrulak
    Vogella has this blog post about content providers, and the snippet below (at the bottom) has this line: cursor.setNotificationUri(getContext().getContentResolver(), uri); I'm curious as to why one would want to notify listeners about a query operation. Am I missing something? Thanks @Override public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { // Using SQLiteQueryBuilder instead of query() method SQLiteQueryBuilder queryBuilder = new SQLiteQueryBuilder(); // Check if the caller has requested a column which does not exist checkColumns(projection); // Set the table queryBuilder.setTables(TodoTable.TABLE_TODO); int uriType = sURIMatcher.match(uri); switch (uriType) { case TODOS: break; case TODO_ID: // Adding the ID to the original query queryBuilder.appendWhere(TodoTable.COLUMN_ID + "=" + uri.getLastPathSegment()); break; default: throw new IllegalArgumentException("Unknown URI: " + uri); } SQLiteDatabase db = database.getWritableDatabase(); Cursor cursor = queryBuilder.query(db, projection, selection, selectionArgs, null, null, sortOrder); // Make sure that potential listeners are getting notified cursor.setNotificationUri(getContext().getContentResolver(), uri); return cursor; }

    Read the article

  • Calculating rotation in > 360 deg. situations

    - by danglebrush
    I'm trying to work out a problem I'm having with degrees. I have data that is a list of angles, in standard degree notation -- e.g. 26 deg. Usually when dealing with angles, if an angle exceeds 360 deg then the angle continues around and effectively "resets" -- i.e. the angle "starts again", e.g. 357 deg, 358 deg, 359 deg, 0 deg, 1 deg, etc. What I want to happen is for the degrees to continue increasing -- i.e. 357 deg, 358 deg, 359 deg, 360 deg, 361 deg, etc. I want to modify my data so that I have this converted data in it. When numbers approach the 0 deg limit, I want them to become negative -- i.e. 3 deg, 2 deg, 1 deg, 0 deg, -1 deg, -2 deg, etc. With multiples of 360 deg (both positive and negative), I want the degrees to continue, e.g. 720 deg, etc. Any suggestions on what approach to take? There is, no doubt, a frustratingly simple way of doing this, but my current solution is kludgey to say the least! My best attempt to date is to look at the percentage difference between angle n and angle n - 1. If this is a large difference -- e.g. 60% -- then it needs to be modified, by adding or subtracting 360 deg to the current value, depending on the previous angle value. That is, if the previous angle is negative, subtract 360, and add 360 if the previous angle is positive. Any suggestions on improving this?
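
    One way to avoid the percentage heuristic is the standard unwrapping approach: take the shortest signed step between consecutive wrapped samples and accumulate it (this is what numpy.unwrap does, in radians). A sketch, assuming consecutive samples differ by less than 180 deg:

      def unwrap_degrees(angles):
          # Accumulate angles so they keep counting past 0/360 instead of wrapping.
          unwrapped = [angles[0]]
          for a in angles[1:]:
              prev = unwrapped[-1]
              step = (a - prev % 360 + 540) % 360 - 180   # shortest signed step
              unwrapped.append(prev + step)
          return unwrapped

      print(unwrap_degrees([357, 358, 359, 0, 1]))    # [357, 358, 359, 360, 361]
      print(unwrap_degrees([3, 2, 1, 0, 359, 358]))   # [3, 2, 1, 0, -1, -2]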

    Read the article

  • jQuery code in ajax loaded content only runs once

    - by Michael Howland
    I have been looking around SO for a while and haven't been able to find anything that matches my issue, which I'm not even sure I can explain that well, so take that for what it's worth. I have a page that loads content into a div via AJAX (using the .load() method). There are several links in the navigation, meaning the content will change while navigating the site without refreshing the entire page. (Actually, to be honest, I just cribbed the DocTemplate layout [http://css-tricks.com/examples/DocTemplate/] from css-tricks.com. Apparently while I'm not a re-invent the wheel type programmer, I am a bash my head against the wheel incessantly to get it to work programmer.) So, index.php loads up some DB content in a div. There is also a jQuery UI modal input form on index.php. Essentially, the only HTML on the page is an empty div and a form. This all works fine, until I call up another page, then go back to index.php. The DB content is not loaded, and my form is shown there in all its naked glory. I know why this is happening. The page was not refreshed, nothing kicked off the code to load the content and hide the form. My question is, how can I ensure that the AJAX .load() and the .dialog() will run when loading index.php again? Is it even possible? Thanks, and my apologies for the length. I get verbose when I'm confused.

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++, where performance is critical. I need to use a lot of locking while copying small structures between threads; for this I have chosen to use spinlocks. I have done some research and speed testing on this and I found that most implementations are roughly equally fast: Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units. Implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units. I've also tried to use some inline assembly with __asm {} using something like this code and it scores about 70 time units, but I am not sure that a proper memory barrier has been created. Edit: The times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times. I know this isn't a lot of difference, but as a spinlock is a heavily used object, one would think that programmers would have agreed on the fastest possible way to make a spinlock. Googling it leads to many different approaches however. I would think this aforementioned method would be the fastest if implemented using inline assembly and using the instruction CMPXCHG8B instead of comparing 32bit registers. Furthermore memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. Lastly, [some suggest] that busy waits should be accompanied by NOP:REP, which would enable Hyper-threading processors to switch to another thread, but I am not sure whether this is true or not. From my performance test of different spinlocks, it is seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However as I have extremely limited experience with assembly language and with memory barriers, I would be happy if someone could write the assembly code for the last example I provided with LOCK CMPXCHG8B and proper memory barriers in the following template: __asm { spin_lock: ;locking code. spin_unlock: ;unlocking code. }

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack these in the most optimum way. By identical I mean having the same number of order lines containing the same SKUs and same order quantities. To achieve this I was thinking about hashing each order. We can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database and we have .NET based systems for data loading and general order processing systems, so we can either do the hashing during the data loading or hand this task over to the DB. My question firstly is should the hashing be managed by DB, possibly using triggers, or should the hash be created on-the-fly using a view or something? And secondly would it be best to calculate a hash for each order line and then to combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and store the value in the orders table? TIA
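
    As a sketch of the order-level hash (any language works; Python here for brevity): sort the (SKU, quantity) pairs so line order doesn't matter, then hash a canonical string of them, so identical orders collapse to the same value regardless of where the hash is computed (trigger, view, or the .NET loader).

      import hashlib

      def order_hash(order_lines):
          # order_lines: iterable of (sku, quantity) pairs for one order.
          canonical = "|".join("%s:%s" % (sku, qty)
                               for sku, qty in sorted(order_lines))
          return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

      # Orders with the same SKUs and quantities hash identically:
      assert order_hash([("ABC", 2), ("XYZ", 1)]) == order_hash([("XYZ", 1), ("ABC", 2)])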

    Read the article

  • Find records IN BETWEEN Date Range

    - by Muhammad Kashif Nadeem
    Please see the attached image. I have a table which has FromDate and ToDate columns. FromDate is the start of some event and ToDate is the end of that event. I need to find a record if the search criteria fall within the range of dates. e.g. If a record has FromDate 2010/5/15 and ToDate 2010/5/25 and my criteria are FromDate 2010/5/18 and ToDate 2010/5/21 then this record should be in the search results, because the criteria fall in the range of the 15th to the 25th. Following is my search query (a chunk of it): SELECT m.EventId FROM MajorEvents WHERE ( (m.LocationID = @locationID OR @locationID IS NULL) OR M.LocationID IS NULL) AND ( CONVERT(VARCHAR(10),M.EventDateFrom,23) BETWEEN CONVERT(VARCHAR(10),@DateTimeFrom,23) AND CONVERT(VARCHAR(10),@DateTimeTo,23) OR CONVERT(VARCHAR(10),M.EventDateTo,23) BETWEEN CONVERT(VARCHAR(10),@DateTimeFrom,23) AND CONVERT(VARCHAR(10),@DateTimeTo,23) ) If the search criteria are equal to FromDate or ToDate then the results are OK. e.g. If the search criteria are DateFrom = 2010/5/15 AND DateTo = 2010/5/18 then this record is returned, because DateFrom is exactly what is DateFrom in the db. OR if the search criteria are DateFrom = 2010/5/22 AND DateTo = 2010/5/25 then this record is returned, because DateTo is exactly what is DateTo in the db. But for anything in between this range it does not work. Thanks for the help.
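
    The condition being searched for here is the classic interval-overlap test: an event range and a search range overlap exactly when each one starts before the other ends, i.e. EventDateFrom <= @DateTimeTo AND EventDateTo >= @DateTimeFrom. A quick Python illustration of that predicate (dates only, names made up for clarity):

      from datetime import date

      def ranges_overlap(event_from, event_to, search_from, search_to):
          # Two closed date ranges overlap iff each starts before the other ends.
          return event_from <= search_to and event_to >= search_from

      # The event 2010-05-15..2010-05-25 overlaps the search 2010-05-18..2010-05-21:
      print(ranges_overlap(date(2010, 5, 15), date(2010, 5, 25),
                           date(2010, 5, 18), date(2010, 5, 21)))   # True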

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example select all entries between times T1 and T2 where the noiselevel is smaller than X. On the other hand, I would like to use a NoSQL/Key-Value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder.. Is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, Mongo DB? I've looked in the Documentation, but the only thing I've found was the following snippet in their docu: Note that any of the operators on this page can be combined in the same query document. For example, to find all document where j is not equal to 3 and k is greater than 10, you'd query like so: db.things.find({j: {$ne: 3}, k: {$gt: 10} }); So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
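
    For MongoDB specifically, a single query document can carry range conditions on two different fields; a sketch with pymongo (collection and field names are made up, and a local mongod is assumed):

      from datetime import datetime
      from pymongo import MongoClient

      client = MongoClient()                  # assumes a local mongod
      readings = client.testdb.readings

      t1, t2 = datetime(2010, 5, 1), datetime(2010, 5, 2)
      max_noise = 40

      # All entries between t1 and t2 whose noise level is below max_noise:
      cursor = readings.find({
          "timestamp": {"$gte": t1, "$lte": t2},
          "noiselevel": {"$lt": max_noise},
      })
      for doc in cursor:
          print(doc)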

    Read the article

  • This pagination script is doodoo - I need a better one!

    - by ClarkSKent
    Hello, I was looking at the pagination script (posted below) and found it to be gross and not very good at all, especially when trying to customize it. This is what the main page looks like: <?php include('config.php'); $per_page = 9; //Calculating no of pages $sql = "select * from messages"; $result = mysql_query($sql); $count = mysql_num_rows($result); $pages = ceil($count/$per_page) ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script> <script type="text/javascript" src="jquery_pagination.js"></script> <div id="loading" ></div> <div id="content" ></div> <ul id="pagination"> <?php //Pagination Numbers for($i=1; $i<=$pages; $i++) { echo '<li id="'.$i.'">'.$i.'</li>'; } ?> </ul> The top part of the code gets the results from the mysql db and then uses this information to display the numbers in the body of this page. I am trying to put something like this on a separate page like count_page.php and then just include it. I guess my question is whether there is a better way of doing the above, with better structure: a better way to go through the db, count the results and display the appropriate numbers. The above seems messy. Thanks for any help or suggestions on this.

    Read the article

  • Extract the vb code associated with a macro attached to an Action button in PowerPoint

    - by Patricker
    I have about 25 PowerPoint presentations, each with at least 45 slides. On each slide is a question with four possible answers and a help button which provides a hint relevant to the question. Each of the answers and the help button is a PowerPoint Action button that launches a macro. I am attempting to migrate all the questions/answers/hints into a SQL Database. I've worked with Office.Interop before when working with Excel and Word and I have plenty of SQL DB experience, so I don't foresee any issues with actually extracting the text portion of the question and answer and putting it into the db. What I have no idea how to do is given an object on a slide - get the action button info - Get the macro name - and finally get the macro's vb code. From there I can figure out how to parse out which is the correct answer and what the text of the hint is. Any help/ideas would be greatly appreciated. The guy I'm doing this for said he is willing to do it by hand... but that would take a while.

    Read the article

  • BASH echo write mysql input

    - by jmituzas
    I have a bash menu where variables are written to a file for MySQL input. Here's what I have: echo "CREATE DATABASE '$mysqldbn'; #GRANT ALL PRIVILEGES ON *.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup' WITH GRANT OPTION; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myip' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES< LOCK TABLES on '$mysqldbn'.* TO '$mysqlu'@'$rip' IDENTIFIED BY '$mysqlup';" > nmysql.db mysql -u root -p$mypass < nmysql.db The problem is that to get the variables to show I had to put them in single quotes, and the single quotes show up as I want for instances like '$mysqlu'@'localhost'. But how can I remove the quotes and still get to use the variable in an instance like CREATE DATABASE '$mysqldbn'? Double quotes won't work either; I am at a loss. Thanks in advance, Joe

    Read the article

  • How to store session values with Node.js and mongodb?

    - by Tirithen
    How do I get sessions working with Node.js, [email protected] and mongodb? I'm now trying to use connect-mongo like this: var config = require('../config'), express = require('express'), MongoStore = require('connect-mongo'), server = express.createServer(); server.configure(function() { server.use(express.logger()); server.use(express.methodOverride()); server.use(express.static(config.staticPath)); server.use(express.bodyParser()); server.use(express.cookieParser()); server.use(express.session({ store: new MongoStore({ db: config.db }), secret: config.salt })); }); server.configure('development', function() { server.use(express.errorHandler({ dumpExceptions: true, showStack: true })); }); server.configure('production', function() { server.use(express.errorHandler()); }); server.set('views', __dirname + '/../views'); server.set('view engine', 'jade'); server.listen(config.port); I'm then, in a server.get callback, trying to use req.session.test = 'hello'; to store that value in the session, but it's not stored between requests. It probably takes something more than this to store session values; how? Is there a better-documented module than connect-mongo?

    Read the article

  • PHP header redirection does not reload <iframe> in IE

    - by Marco Demaio
    When displaying data from the DB I'm usually in this situation: (1) I'm on page A.php, which shows data from the DB; (2) the user performs some action (like edit/delete etc.) and page B.php is loaded to perform the action; (3) once page B has performed the action, it redirects the browser to page A; (4) page A is auto-reloaded during step (3), therefore it shows an updated view of the data. To make page B redirect to page A I use a simple PHP header("Location: " . "A.php", TRUE, 302); This works well in all situations, except when page A.php is displayed in an <iframe>: in such a case it does not reload (step 4 does not get done). This seems to happen only in IE7 (I don't know about IE8); it works perfectly in FF/Safari, and only when using an <iframe>: if page A.php is not in an <iframe> it gets refreshed in IE7 as well. To solve this I simply added a couple of headers in page A.php to set it to not be cached: header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past But I was curious whether you might have experienced the same issue in the past, and whether you could give me some advice about this.

    Read the article

  • Alternatives to LINQ To SQL on high loaded pages

    - by Alex
    To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying. But, there's one great problem: it doesn't work well on high loaded requests. I have some actions in my ASP.NET MVC project, that are called hundreds times every minute. I used to have LINQ to SQL there, but since the amount of requests is gigantic, LINQ TO SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request. var stat = DB.Stats.First(); stat.Visits++; // .... DB.SubmitChanges(); But while ASP.NET was working on those //... instructions, the stats.Visits value stored in the table got changed. I found a solution, I created a stored procedure UPDATE Stats SET Visits=Visits+1 It works well. Unfortunately now I'm getting more and more moments like that. And it sucks to create stored procedures for all cases. So my question is, how to solve this problem? Are there any alternatives that can work here? I hear that Stackoverflow works with LINQ to SQL. And it's more loaded than my site.

    Read the article
