Search Results

Search found 10028 results on 402 pages for 'berkeley db'.

Page 134/402 | < Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >

  • Removing a result from Queryset

    - by Enrico
    Is there a simple way to discard/remove the last result in a queryset without affecting the db? I am trying to paginate results in Django, but don't know the total number of objects for a given query. I was planning on using next/previous or older/newer links, so I only need to know if this is the first and/or last page. First is easy to check. To check for the last page I can compare the number of results with the pagesize or make a second query. The first method fails to detect the last page when the number of results in the last set equals the pagesize (i.e. 100 records get broken into 10 pages with the last page containing exactly 10 results) and I would like to avoid making a second query. My current thought is that I should fetch pagesize + 1 results from the db. If the queryset length equals pagesize + 1 (11 in this example), I know this is not the last page and I want to discard the last result in the queryset before passing the queryset to the template.
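
    A minimal sketch of that fetch-one-extra idea (the Project model and view function are hypothetical, not from the question):

        PAGE_SIZE = 10

        def project_page(page_number):
            offset = (page_number - 1) * PAGE_SIZE
            # ask the db for one row more than we intend to display
            rows = list(Project.objects.all()[offset:offset + PAGE_SIZE + 1])
            is_last_page = len(rows) <= PAGE_SIZE
            if not is_last_page:
                rows.pop()  # drop the extra row; the db is not touched
            return rows, is_last_page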

    Read the article

  • sync framework server to server synchronization

    - by nihi_l_ist
    I have a scenario like this: I need to synchronize a local server database with the main DB server (example: computers in one office are connected to the office server and use it as a local server, so no sync is required there. BUT computers in another office work with their own local server too, and we need synchronization between the offices through the main DB server). As I see it, I can't use SQL Compact here. Is there a provider to do the server-to-server synchronization right from the client? If not, can someone provide a sample solution for how to manage such a situation?
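
    If Sync Framework 2.x with the SQL Server providers fits, a hedged sketch of driving a server-to-server sync from one box could look like this (the scope name and connection strings are assumptions, and both databases would have to be provisioned for the scope first):

        using System.Data.SqlClient;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Data.SqlServer;

        class OfficeSync
        {
            static void Main()
            {
                using (var local = new SqlConnection("Data Source=LOCALSRV;Initial Catalog=Office;Integrated Security=True"))
                using (var main  = new SqlConnection("Data Source=MAINSRV;Initial Catalog=Office;Integrated Security=True"))
                {
                    var orchestrator = new SyncOrchestrator
                    {
                        LocalProvider  = new SqlSyncProvider("OfficeScope", local),   // assumed scope name
                        RemoteProvider = new SqlSyncProvider("OfficeScope", main),
                        Direction      = SyncDirectionOrder.UploadAndDownload
                    };
                    orchestrator.Synchronize();   // pushes and pulls changes for the scope
                }
            }
        }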

    Read the article

  • navigator.onLine

    - by cf_PhillipSenn
    I'm playing with the incomplete example found at http://www.w3.org/TR/offline-webapps/ But I'm distressed to see comments in it like: "renders the note somewhere", and "report error", and "// …" So, will someone please help me write a valid example? Here's what I've got so far: <!DOCTYPE HTML> <html manifest="cache-manifest"> <head> <script> var db = openDatabase("notes", "", "The Example Notes App!", 1048576); function renderNote(row) { // renders the note somewhere } function reportError(source, message) { // report error } function renderNotes() { db.transaction(function(tx) { tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []); tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { for(var i = 0; i < rs.rows.length; i++) { renderNote(rs.rows[i]); } }); }); } function insertNote(title, text) { db.transaction(function(tx) { tx.executeSql('INSERT INTO Notes VALUES(?, ?)', [ title, text ], function(tx, rs) { // … }, function(tx, error) { reportError('sql', error.message); }); }); } </script> <style> label { display:block; } </style> </head> <body> <form> <label for="mytitle">Title:</label> <input name="mytitle"> <label for="mytext">Text:</label> <textarea name="mytext"></textarea> <!-- There is no submit button because I want to save the info on every keystroke --> </form> </body> </html> I also know that I have to incorporate this in there somewhere: if (navigator.onLine) { // Send data using XMLHttpRequest } else { // Queue data locally to send later } But I'm not sure what event I would tie that to.
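
    One hedged way to tie it together (the /notes endpoint and the listen-on-input choice are assumptions, not part of the W3C example): save locally on every keystroke, and only attempt the network call when navigator.onLine is true.

        function saveNote() {
          var title = document.querySelector('[name=mytitle]').value;
          var text  = document.querySelector('[name=mytext]').value;
          insertNote(title, text);                 // always store locally first
          if (navigator.onLine) {
            var xhr = new XMLHttpRequest();        // then push to the server if we can
            xhr.open('POST', '/notes');            // hypothetical endpoint
            xhr.setRequestHeader('Content-Type', 'application/json');
            xhr.send(JSON.stringify({ title: title, body: text }));
          }                                        // else: it stays queued in the local db
        }
        document.querySelector('[name=mytitle]').addEventListener('input', saveNote);
        document.querySelector('[name=mytext]').addEventListener('input', saveNote);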

    Read the article

  • Fluent NHibernate SchemaExport to SQLite not pluralizing Table Names

    - by weenet
    I am using SQLite as my db during development, and I want to postpone actually creating a final database until my domains are fully mapped. So I have this in my Global.asax.cs file: private void InitializeNHibernateSession() { Configuration cfg = NHibernateSession.Init( webSessionStorage, new [] { Server.MapPath("~/bin/MyNamespace.Data.dll") }, new AutoPersistenceModelGenerator().Generate(), Server.MapPath("~/NHibernate.config")); if (ConfigurationManager.AppSettings["DbGen"] == "true") { var export = new SchemaExport(cfg); export.Execute(true, true, false, NHibernateSession.Current.Connection, File.CreateText(@"DDL.sql")); } } The AutoPersistenceModelGenerator hooks up the various conventions, including a TableNameConvention like so: public void Apply(FluentNHibernate.Conventions.Instances.IClassInstance instance) { instance.Table(Inflector.Net.Inflector.Pluralize(instance.EntityType.Name)); } This is working nicely except that the generated SQLite db does not have pluralized table names. Any idea what I'm missing? Thanks.

    Read the article

  • Using a table-alias in Kohana queries?

    - by Aristotle
    I'm trying to run a simple query with $this->db in Kohana, but am running into some syntax issues when I try to use an alias for a table within my query: $result = $this->db ->select("ci.chapter_id, ci.book_id, ci.chapter_heading, ci.chapter_number") ->from("chapter_info ci") ->where(array("ci.chapter_number" => $chapter, "ci.book_id" => $book)) ->get(); It seems to me that this should work just fine. I'm stating that "chapter_info" ought to be known as "ci," yet this isn't taking for some reason. The error is pretty straight-forward: There was an SQL error: Table 'gb_data.chapter_info ci' doesn't exist - SELECT `ci`.`chapter_id`, `ci`.`book_id`, `ci`.`chapter_heading`, `ci`.`chapter_number` FROM (`chapter_info ci`) WHERE `ci`.`chapter_number` = 1 AND `ci`.`book_id` = 1 If I use the full table name, rather than an alias, I get the expected results without error. This requires me to write much more verbose queries, which isn't ideal. Is there some way to use shorter names for tables within Kohana's query-builder?

    Read the article

  • ImageMagick: Tiff to PDF from PHP

    - by Sheldon
    How can I convert 2 TIFF images to PDF? I already know how to get the image out of the DB, and I print it using echo and setting the MIME type. But right now I need to use a duplex printer option, so I need a way to generate a PDF from inside my PHP page, and that PDF must contain both TIFF images (one per page). How can I do that? What do I need for PHP to work with that library? Thank you very much. EDIT: It is a self-hosted app, I own the server (actually I'm using WAMP 2). I extract the images from the MySQL DB (stored using LONGBLOBs).
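
    A hedged sketch with the Imagick extension (assuming it is installed on the WAMP box; the blob variables are placeholders for whatever comes out of the LONGBLOB columns):

        $pdf = new Imagick();
        $pdf->readImageBlob($tiffBlob1);   // first TIFF from the db, becomes page 1
        $pdf->readImageBlob($tiffBlob2);   // second TIFF appended, becomes page 2
        foreach ($pdf as $page) {
            $page->setImageFormat('pdf');  // convert each frame to PDF
        }
        header('Content-Type: application/pdf');
        echo $pdf->getImagesBlob();        // both pages in a single PDF stream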

    Read the article

  • Prepare and import data into existing database

    - by Álvaro G. Vicario
    I maintain a PHP application with a SQL Server backend. The DB structure is roughly this:

        lot
        ===
        lot_id (pk, identity)
        lot_code

        building
        ========
        building_id (pk, identity)
        lot_id (fk)

        inspection
        ==========
        inspection_id (pk, identity)
        building_id (fk)
        date
        inspector
        result

    The database already has lots and buildings and I need to import some inspections. Key points are: it's a one-time initial load; data comes in an Excel file; the Excel data is unaware of the DB's autogenerated IDs, so inspections must be linked to buildings through their lot_code. What are my options for doing such a data load?

        date        inspector   result  lot_code
        ==========  ==========  ======  ========
        31/12/2009  John Smith  Pass    987654X
        28/02/2010  Bill Jones  Fail    123456B
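
    If the Excel rows are first loaded into a staging table (for example via the SQL Server Import wizard), a hedged T-SQL sketch of the load is a single INSERT ... SELECT (staging_inspection is a hypothetical name, and this assumes one building per lot, or some extra rule to pick the right building):

        INSERT INTO inspection (building_id, [date], inspector, result)
        SELECT b.building_id, s.[date], s.inspector, s.result
        FROM staging_inspection AS s
        JOIN lot      AS l ON l.lot_code = s.lot_code
        JOIN building AS b ON b.lot_id   = l.lot_id;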

    Read the article

  • flex actionscript Datagridcolumn array

    - by Jad
    Hi, we have an AIR app that uses a DataGrid. We want to store the dataGrid.columns array in a MySQL DB through PHP. This is needed because the user can customise the column headers of the datagrid, and his preference needs to be stored and shown to him on his next login. Using HTTPService, we tried sending the dataGrid.columns array as a string, as follows: var ht:HTTPService = new HTTPService(); ht.url = Config.getServerURL(); ht.method = URLRequestMethod.POST; ht.resultFormat = "text"; ht.request["action"] = "updateGrid"; ht.request["headercolumns"] = colsArray.toString(); The data is stored as a comma-separated string in the DB. When we retrieve it back, we cannot seem to cast it back to DataGridColumns and assign it. Please let me know. Regards, Jada.

    Read the article

  • ASP.NET cached aspx page & IIS logs

    - by Vishal Seth
    Hi guys, is there any way to find out whether the ASP.NET runtime has served a cached copy of an ASPX page or actually went through the page life cycle? Here is my problem: I'm seeing many entries in my IIS log files that were served successfully (200 OK). I have corresponding logging code (Log4Net API) in the Session_Start and Application_BeginRequest() events that logs every request to my DB with more details. I'm not seeing corresponding entries in my SQL DB for some cases that should have been created by the Log4Net code. Are there any logs available to find out if a cached copy was served by the .NET worker process? Moreover, if my logging code threw an exception, wouldn't that show up as a 500 in the IIS logs? The code is on Windows 2008 Server, IIS 7.

    Read the article

  • Please suggest some alternative to Drupal

    - by abovesun
    Drupal proposes a completely different approach to web development (compared with RoR-like frameworks) and it is extremely good from a development-speed perspective. For example, it is quite easy to clone 90% of stackoverflow functionality using Drupal. But it has several big drawbacks: it is f''cking slow (100-400 db queries per page); the db structure is very complicated (you need at least 2 tables for a simple content (entity) type, and CCK fields very easily generate tons of new db tables); it is anti-object-oriented, rather aspect-oriented; and it has a bad "view" layer implementation, no straightforward layouts, and so on. After all these items I can say I like Drupal, but I would like something similar, yet more elegant and more object-oriented. Probably something like http://drupy.net/ - a Drupal emulation on top of Django. P.S. I wrote this question not to start a new holy-war flame; just reply if you know an alternative that uses a similar approach.

    Read the article

  • SQL efficiency argument, add a column or solvable by query?

    - by theTurk
    I am a recent college graduate and a new hire for software development. Things have been a little slow lately so I was given a db task. My db skills are limited to pet projects with Rails and Django. So, I was a little surprised by my latest task. I have been asked by my manager to subclass Person with a 'Parent' table and add a reference to their custodian in the Person table. This is to facilitate going from Parent to Form when the custodian, not the Parent, is the FormContact. Here is a simplified, mock structure of the SQL db I am working with. I would have drawn the relationship tables if I had access to Visio. We have a table 'Person' and we have a table 'Form'. There is a table, 'FormContact', that relates a Person to a Form; not all Persons are related to a Form. There is a relationship table for Person-to-Person relationships (Employer, Parent, etc.). I asked, "Why couldn't this be handled by a query?" Response: inefficient. (Really!?!) So, I ask, "Why not have a reference to the Form? That would be more efficient since you wouldn't be querying the FormContacts table with the reference from child/custodian." Response: this would essentially make the Parent a FormContact. (Fair enough.) I went ahead and wrote a query to get from a non-FormContact Parent to the Form, and tested it on the production server. The response time was instantaneous. *SOME_VALUE* is the Parent's fk ID. SELECT FormID FROM FormContact WHERE FormContact.ContactID IN (SELECT SourceContactID FROM ContactRelationship WHERE (ContactRelationship.RelatedContactID = *SOME_VALUE*) AND (ContactRelationship.Relationship = 'Parent')); If I am right, this is an unnecessary change. What should I do - defend my position, or concede to the manager's request? If I am wrong, what is my error? Is there a better solution than the manager's?
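
    For comparison, the same lookup written with joins instead of the nested IN (a sketch against the table names above; *SOME_VALUE* is the same placeholder):

        SELECT fc.FormID
        FROM ContactRelationship AS cr
        JOIN FormContact AS fc ON fc.ContactID = cr.SourceContactID
        WHERE cr.RelatedContactID = *SOME_VALUE*
          AND cr.Relationship = 'Parent';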

    Read the article

  • trying to use ActiveRecord with Sinatra, Migration fails question

    - by David Lazar
    Hi, running Sinatra 1.0, I wanted to add a database table to my program. In my Rakefile I have a task task :environment do ActiveRecord::Base.establish_connection(YAML::load(File.open('config/database.yml'))["development"]) end I have a migration task in my namespace that calls the migration code: namespace :related_products do desc "run any migrations we may have in db/migrate" task :migrate => :environment do ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil ) end My console pukes out an error when the call to ActiveRecord::Migrator.migrate() is made. rake aborted! undefined method `info' for nil:NilClass The migration code itself is pretty simple... and presents me with no clues as to what this missing `info' method is being called on. class CreateStores < ActiveRecord::Migration def self.up create_table :stores do |t| t.string :name t.string :access_url t.timestamps end end def self.down drop_table :stores end end I am a little mystified here and am looking for some clues as to what might be wrong. Thanks!
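
    For what it's worth, "undefined method `info' for nil:NilClass" during a migration is often just ActiveRecord::Base.logger being nil when the migrator tries to announce progress; a hedged sketch of the environment task with a logger set (assuming that is the cause here):

        require 'logger'

        task :environment do
          ActiveRecord::Base.logger = Logger.new(STDOUT)  # the migrator logs through this
          ActiveRecord::Base.establish_connection(
            YAML::load(File.open('config/database.yml'))["development"]
          )
        end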

    Read the article

  • Dynamic data validation in ASP.NET MVC

    - by user252160
    I've recently read about the model validation capabilities of ASP.NET MVC, which are all very cool up to a point. What happens if the application doesn't know the data it works with, because it is all stored in the DB and assembled at runtime? Just like in Drupal, I'd like to be able to define custom types at runtime and assign runtime validation rules as well. Obviously, the idea of assigning attributes to well-established models is now gone. What else could be done? I am thinking in terms of rules being stored as JSON objects in DB fields, or something like that.

    Read the article

  • Data Access Layer - static list objects and caching

    - by Truegilly
    Hello, I am developing a site using .NET MVC. I have a data access layer which basically consists of static list objects that are created from data within my database. The method that rebuilds this data first clears all the list objects; once they are empty it then adds the data. Here is an example of one of the lists I'm using - the list plus the method which generates all the UK postcodes. There are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails etc. public static List<PostCode> AllPostCodes = new List<PostCode>(); When the rebuild method is called it first clears the list: ListPostCodes.AllPostCodes.Clear(); Next it rebuilds the data by calling the GetAllPostCodes() method: /// <summary> /// static method that returns all the UK postcodes /// </summary> public static void GetAllPostCodes() { using (fab_dataContextDataContext db = new fab_dataContextDataContext()) { IQueryable AllPostcodeData = from data in db.PostCodeTables select data; IDbCommand cmd = db.GetCommand(AllPostcodeData); SqlDataAdapter adapter = new SqlDataAdapter(); adapter.SelectCommand = (SqlCommand)cmd; DataSet dataSet = new DataSet(); cmd.Connection.Open(); adapter.FillSchema(dataSet, SchemaType.Source); adapter.Fill(dataSet); cmd.Connection.Close(); // create the objects foreach (DataRow row in dataSet.Tables[0].Rows) { PostCode postcode = new PostCode(); postcode.ID = Convert.ToInt32(row["PostcodeID"]); postcode.Outcode = row["OutCode"].ToString(); postcode.Latitude = Convert.ToDouble(row["Latitude"]); postcode.Longitude = Convert.ToDouble(row["Longitude"]); postcode.TownID = Convert.ToInt32(row["TownID"]); AllPostCodes.Add(postcode); postcode = null; } } } The rebuild occurs every hour; this ensures that every hour the site will have a fresh set of cached data. The issue I've got is that occasionally, during a rebuild, the server is hit by a request and an exception is thrown. The exception is "Index was outside the bounds of the array." and it happens while a list is being cleared - ListPostCodes.AllPostCodes.Clear(); throws the exception, although it's not always this particular list. Once this exception is thrown the application dies and all users are affected; I have to restart the server to fix it. I have 2 questions: If I utilise caching instead of static objects, would this help? Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"? Any help is most appreciated ;) truegilly
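
    One hedged sketch of a possible fix (not a full answer): build each list into a private local variable and publish it with a single reference assignment, so requests always see either the old complete list or the new one, never a list that is mid-Clear():

        public static volatile List<PostCode> AllPostCodes = new List<PostCode>();

        public static void RebuildAllPostCodes()
        {
            var fresh = new List<PostCode>();
            // ... fill 'fresh' exactly as GetAllPostCodes() fills AllPostCodes today ...
            AllPostCodes = fresh;   // atomic swap of the reference; no Clear() needed
        }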

    Read the article

  • In django models, how to make all table names not have the app label?

    - by Luigi
    I have a database that was already being used by other applications before I began writing a web interface with Django for it. The table names follow simple naming standards, so the Django model Customer should map to the table "customer" in the db. At the same time I'm adding new tables/models. Since I find it cumbersome to use app_customer every time I have to write a query in the other applications (Django's ORM is definitely not enough for them), and I don't want to rename the existing tables, what is the best way to make all models in my Django app use tables without the applabel_ prefix, besides adding a Meta class with db_table= to each model? Is there any reason why I shouldn't do this? I have only one web app that needs to access this db; everything else doesn't use Django models.
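
    One hedged alternative to a Meta class on every model is the class_prepared signal, rewriting db_table as each model class is built (a sketch; 'myapp' is a placeholder, and this assumes your Django version still lets _meta.db_table be changed at that point):

        from django.db.models.signals import class_prepared

        def strip_app_label(sender, **kwargs):
            # only touch our own app's models; other apps keep their prefix
            if sender._meta.app_label == 'myapp':
                sender._meta.db_table = sender.__name__.lower()

        # connect this before the models are defined, e.g. at the top of models.py
        class_prepared.connect(strip_app_label)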

    Read the article

  • Querying MySQL with CodeIgniter, selecting rows where field is NULL

    - by rebellion
    I'm using CodeIgniter's Active Record class to query the MySQL database. I need to select the rows in a table where a field is not set to NULL: $this->db->where('archived !=', 'NULL'); $q = $this->db->get('projects'); That only produces this query: SELECT * FROM projects WHERE archived != 'NULL'; The archived field is a DATE field. Is there a better way to solve this? I know I can just write the query myself, but I want to stick with Active Record throughout my code.
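
    A hedged sketch of what usually works here: pass the whole IS NOT NULL condition as a custom where string, so Active Record doesn't try to treat NULL as a quoted value (the exact backtick escaping in the generated SQL may differ by CI version):

        $this->db->where('archived IS NOT NULL');
        $q = $this->db->get('projects');
        // intended SQL: SELECT * FROM projects WHERE archived IS NOT NULL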

    Read the article

  • problems with unpickling a 80 megabyte file in python

    - by tipu
    I am using the pickle module to read and write large amounts of data to a file. After writing an 80 megabyte pickled file, I load it in a SocketServer using class MyTCPHandler(SocketServer.BaseRequestHandler): def handle(self): print("in handle") words_file_handler = open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb') words = pickle.load(words_file_handler) tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r'); results_per_page = 25 query_details = self.request.recv(1024).strip() query_details = eval(query_details) query = query_details["query"] page = int(query_details["page"]) - 1 return_ = [] booleanquery = BooleanQuery(MyTCPHandler.words) if query.find("(") > -1: result = booleanquery.processAdvancedQuery(query) else: result = booleanquery.processQuery(query) result = list(result) i = 0 for tweet_id in result and i < 25: #return_.append(MyTCPHandler.tweets[str(tweet_id)]) return_.append(tweet_id) i += 1 self.request.send(str(return_)) However the file never seems to load after the pickle.load line and it eventually halts the connection attempt. Is there anything I can do to speed this up?
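
    One thing the handler above implies is that the 80 MB pickle is re-read from disk on every connection; a hedged sketch of loading it once at server start instead (assuming the data does not change between requests):

        import pickle
        import SocketServer

        # load once at start-up, not inside handle()
        with open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb') as fh:
            WORDS = pickle.load(fh)

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                booleanquery = BooleanQuery(WORDS)  # reuse the already-loaded structure
                # ... rest of the handler unchanged ...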

    Read the article

  • Increasing speed of webservice - howto

    - by Koran
    Hi, our client-server product uses XML over HTTP as the protocol between the two. The client sends a GET/POST query to the web server and the server responds with XML. The server is written using Django. The server has to be on the web because there are many clients across the world using this. The server code uses extensive memoization and there are very few db queries - most requests don't involve any db query, and some involve at most 1. The biggest problem is the speed: every query takes close to 5 seconds to get a reply, and the data returned is small - in the range of 4-6 KB. What are the mechanisms to improve the speed of the web service? Is this the usual way of writing a client-server system? Are there other technologies we are missing out on? Thank you, K

    Read the article

  • ODBC Linked server in sql 2005 doesn’t work from remote box

    - by mhj96813
    I have a dev workstation with SQL 2005 installed, and in it I created a linked server to an ODBC connection to a Clarion database. I can run select statements against it inside SQL Mgt Studio. When I take a second workstation, connect to the SQL instance on the first box using SQL Mgt Studio, and try the exact same query, I get: OLE DB provider "MSDASQL" for linked server "liveclarion" returned message "[SoftVelocity Inc.][TopSpeed ODBC Driver][ISAM]ISAM Table Not Found". Any thoughts? The same behaviour appears on a second SQL server: no remote SQL Mgt Studio connection succeeds with queries against my linked ODBC Clarion DB. All done with Windows authentication and the same AD user.

    Read the article

  • How do I do automatic data serialization of data objects in Haskell

    - by Adam Gent
    One of the huge benefits of languages that have some sort of reflection/introspection is that objects can be automatically constructed from a variety of sources. For example, in Java I can use the same objects for persisting to a db (with Hibernate), serializing to XML (with JAXB), or serializing to JSON (json-lib). You can do the same in Ruby and Python, usually by following some simple rules for properties, or with annotations in Java. Thus I don't need lots of "Domain Transfer Objects"; I can concentrate on the domain I am working in. It seems that in very strict FP languages like Haskell and OCaml this is not possible - particularly Haskell. The only thing I have seen is doing some sort of preprocessing or meta-programming (OCaml). Is it just accepted that you have to do all the transformations from the bottom up? In other words, do you have to do a lot of boring work to turn a Haskell data type into a JSON/XML/DB row object and back again into a data object?
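
    For the JSON case at least, a hedged sketch of what derived serialization can look like with GHC generics and the aeson package (assuming those are acceptable dependencies; the Customer type is made up for the example):

        {-# LANGUAGE DeriveGeneric #-}
        module Main where

        import GHC.Generics (Generic)
        import Data.Aeson (ToJSON, FromJSON, encode, decode)
        import qualified Data.ByteString.Lazy.Char8 as BL

        data Customer = Customer
          { customerName :: String
          , customerAge  :: Int
          } deriving (Show, Generic)

        instance ToJSON Customer    -- implementation derived from the Generic instance
        instance FromJSON Customer

        main :: IO ()
        main = do
          let c = Customer "Ada" 36
          BL.putStrLn (encode c)                       -- {"customerName":"Ada","customerAge":36}
          print (decode (encode c) :: Maybe Customer)  -- Just (Customer ...)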

    Read the article

  • Invoke a cleanup method for java user thread, when JVM stops the thread

    - by user309281
    Hi all, I have a J2SE application running on Linux. I have a stop-application script in which I kill the J2SE pid. This J2SE application has 6 infinitely running user threads, which poll for specific records in the backend DB. When this Java pid is killed, I need to perform some cleanup operations for each of the long-running threads, like connecting to the DB and setting the status of some in-progress transactions to empty. Is there a way to write a method in each thread which will be called by the JVM when the thread is about to be stopped?
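
    A hedged sketch of the usual mechanism - a JVM shutdown hook, which runs on a plain kill (SIGTERM) from the stop script but not on kill -9; the real per-thread DB cleanup would go where the print statement is:

        public final class ShutdownDemo {
            public static void main(String[] args) throws InterruptedException {
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    @Override
                    public void run() {
                        // called while the JVM is shutting down: reset the status of
                        // in-progress transactions in the DB here, one step per worker thread
                        System.out.println("cleaning up before the JVM stops");
                    }
                });
                Thread.sleep(Long.MAX_VALUE);  // stands in for the six polling threads
            }
        }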

    Read the article

  • scheduled task or windows service

    - by czuroski
    Hello, I have to create an app that will read in some info from a db, process the data, write changes back to the db, and then send an email with these changes to some users or groups. I will be writing this in C#, and this process must run once a week at a particular time, on a Windows 2008 Server. In the past, I would always go the route of creating a Windows service with a timer, setting the time/day for it to run in the app.config file so that it can be changed, with only a service restart needed to pick up the update. Recently, though, I have seen blog posts and such that recommend writing a console application and then using a scheduled task to execute it. I have read many posts addressing this very issue, but have not seen a definitive answer about which approach is better. What do any of you think? Thanks for any thoughts.
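
    For reference, scheduling a console exe weekly on Server 2008 is a one-liner with schtasks (the task name, path, day and time below are placeholders):

        schtasks /Create /TN "WeeklyDbReport" /TR "C:\apps\DbReport.exe" /SC WEEKLY /D MON /ST 06:00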

    Read the article

  • Apply limit in mapreduce function in php?

    - by Rohan Kumar
    How do I apply a limit in PHP/MongoDB when using the mapReduce function? I tried this: $cmd=array(// condition array "mapreduce" => "user", "map" => $map, "reduce" => $reduce, "out" => array("inline" => 1), "limit"=>2 ); $db=connect(); $query = $db->command($cmd);// run command But it's not working: it gives 2 documents, and I can't apply the limit to sub-documents. If I have 100's of sub-documents and I want paging within the sub-documents, then it fails. Is it possible to apply a limit on sub-documents?

    Read the article

  • Does using web services to expose a .NET DAL add security?

    - by Jonno
    Currently my employer deploys a web application over 3 servers: DB - no public route; Web Service DAL - no public route; Web Server - public route. The reason for this is the theory that if the web server is compromised, attackers don't reach the DB directly, but instead arrive at the DAL box. To my mind, since the DAL box and the Web Server box both run Windows/IIS, if the public box has been compromised the same exploit would likely work on the DAL box - therefore I do not see this as a real security benefit. I would like to propose we remove the middle machine and allow the web server to connect directly to the database. Is this middle box really a benefit?

    Read the article

  • Interview question - c#

    - by ltech
    I was tasked with conducting my first interview and would like to pose my question to the world for feedback both on the question and on possible solutions. Question: I have a legacy system with users and files; the info for all files pertaining to a user is stored in a flat file. I want to upgrade this system by storing all info in a db, design the tables, and create a C# system that will populate the new db as well as FTP the files to a new path. Define the design considerations and develop a prototype. Note: We are looking more for what design one would use and why, rather than code that compiles. If it does compile, then kudos to you and we will give it more weight. @Tim C, I did show the interviewee the file: User1234.txt UserID=1234 ParentPath=\\somewhere\nowehere\everywhere\1234 FileCount=20 File0=something0.ext .. File19=something19.ext @Tim C, I have never conducted an interview before, and I followed a script given to me by my senior developer, who was absent.

    Read the article
