Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • How to optimize an SQL query to make it faster

    - by user502083
    Hello everyone. I have a very simple, small database with two tables:

        Node (Node_ID, Node_name, Node_Date)  -- Node_ID is the primary key
        Citation (Origin_Id, Target_Id)       -- PRIMARY KEY (Origin_Id, Target_Id); each column is an FK to Node

    I wrote a query that first finds all citations whose Origin_Id has a specific date, and then returns the target dates of those records. I'm using SQLite from Python; the Node table has 3,000 records and Citation has 9,000. The query lives in a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*)
                             from Node n
                             INNER JOIN (select c.Origin_Id AS Origin_Id,
                                                c.Target_Id AS Target_Id,
                                                n.Node_Date AS Date
                                         from CITATION c
                                         INNER JOIN NODE n ON c.Origin_Id = n.Node_Id
                                         where CAST(n.Node_Date as INT) = {0}) VW
                                 ON VW.Target_Id = n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrieval failed ', e)
            return cited_years

    I then call this function for some specific years, but it's crazy slow (around 1 minute per year). The query returns correct results; it is just slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query. I should also mention that I have indexes on Origin_Id and Target_Id, so the inner join should be pretty fast — but it's not!
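
    A likely culprit, sketched below with that caveat: CAST(n.Node_Date AS INT) is an expression on the column, which prevents SQLite from using any index on Node_Date and forces a scan on every call. A minimal sketch of the usual fixes, assuming Node_Date is stored as a four-digit year string — compare the column directly and bind the parameter instead of formatting it into the SQL:

        import sqlite3

        def cited_years_list(conn, date):
            # Comparing Node_Date directly (no CAST) lets an index on the
            # column be used; '?' binding also avoids injection via format().
            c = conn.cursor()
            c.execute("""SELECT n.Node_Date, COUNT(*)
                         FROM Node origin
                         JOIN Citation ct ON ct.Origin_Id = origin.Node_ID
                         JOIN Node n      ON ct.Target_Id = n.Node_ID
                         WHERE origin.Node_Date = ?
                         GROUP BY n.Node_Date""", (str(date),))
            return c.fetchall()

        # One-time helper indexes (names are illustrative):
        #   CREATE INDEX idx_node_date ON Node(Node_Date);
        #   CREATE INDEX idx_citation_target ON Citation(Target_Id);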


  • Queuing using a table or MSMQ?

    - by Lieven Cardoen
    Part of the application I'm working on is an SWF that shows a test with some 80 questions. Each question is saved to SQL Server through WebORB and ASP.NET. When a candidate finishes the test, the session needs to be validated. The problem is that sometimes 350 candidates finish their test at the same moment, and CPU on the web server and SQL Server explodes (350 validations running concurrently).

    Now, how would I implement queuing here? In the database there's a table that has a record for each session. One column holds the status: 1 is finished, 2 is validated. As I see it (maybe you have other suggestions), I could implement queuing in two ways:

    1. A process checks the table for records with status 1. If it finds one, it validates the session. So sessions are validated one after another.
    2. When a candidate finishes a session, a message is sent to an MSMQ queue. Another process listens to the queue and validates sessions one after another.

    What would be the best approach? And where do you start the process that will validate sessions — in global.asax (Application_Start)? As a Windows service? As an exe in the root of the website, started in Application_Start? To me, using the table and looking for records with status 1 seems the easiest way.
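
    For the table-polling option, a minimal worker sketch (for example, inside a Windows service — which avoids the lifetime problems of threads started from Application_Start). The UPDATE ... OUTPUT claims one finished session atomically, so even several workers could not validate the same session twice. Table and column names are illustrative:

        static void ValidationLoop(string connectionString)
        {
            while (true)
            {
                using (var con = new SqlConnection(connectionString))
                {
                    con.Open();
                    var cmd = new SqlCommand(
                        @"UPDATE TOP (1) Sessions
                          SET Status = 2
                          OUTPUT inserted.SessionId
                          WHERE Status = 1", con);
                    object id = cmd.ExecuteScalar();

                    if (id == null || id == DBNull.Value)
                    {
                        Thread.Sleep(5000);    // queue empty -- back off
                        continue;
                    }
                    ValidateSession((int)id);  // the expensive part, one at a time
                }
            }
        }

    A separate "in progress" status value instead of jumping straight to 2 would let a crashed worker's sessions be retried.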


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this (note the original tested "result" instead of "regions", which is fixed here):

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2,592,000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them along with related info from the others. Generating the page takes a few seconds even for a very small table (<100 records). Is there an easy way to cache queries from templates? Or do I have to build a big structure in my view (with all related tables), cache that, and send it to the template?
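
    For the template question, two common options — hedged sketches, not drop-in code, with illustrative names. First, the slowness with related rows is often one query per rendered row; select_related() in the view collapses those into one JOIN:

        # views.py
        profiles = SocialNetworkProfile.objects.select_related().all()

    Second, Django can cache the rendered fragment itself, so the template's queries only run when the fragment expires:

        {% load cache %}
        {% cache 900 relationship_block %}
            ... expensive relationship rendering ...
        {% endcache %}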


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it produces a huge lag that lasts a couple of seconds; then everything goes back to normal until the next lag. The period varies, but it looks like it happens periodically — like hiccups in MySQL. I am using InnoDB.

    What could be creating this sort of periodic problem? I do not have any cron jobs or processes running whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours: for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided over 3 databases. The most heavily written table is InnoDB; the others are mostly read. Several of the InnoDB tables have more than 2 million records. The other databases top out at about 400 thousand records and do not change often. The PC is a Core 2 Duo 8400 with 4 GB RAM, running 32-bit Ubuntu.
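
    Two things in this configuration are worth ruling out first — hedged guesses, not a diagnosis: a 128M query cache (invalidation on a write-heavy InnoDB table can periodically stall all readers) and InnoDB flushing/checkpoint stalls. Running something like this while a hiccup is actually happening usually narrows it down:

        -- What is blocked right now?
        SHOW FULL PROCESSLIST;

        -- Pending fsyncs / checkpoint activity at the moment of the stall
        SHOW ENGINE INNODB STATUS\G

        -- A large, busy query cache is a classic source of periodic stalls
        SHOW GLOBAL STATUS LIKE 'Qcache%';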


  • MySQL query against pseudo-key-value pair data in WordPress custom query

    - by andrevr
    I'm writing a custom WordPress query to use some of the data the Woothemes Diarise theme creates. Diarise is an event-planner theme with a calendar, and it stores each event's start and end dates as custom fields in the wp_postmeta table, which implements a key-value store. So for each post in the "event" category there are two records in wp_postmeta that I'm interested in, named event_start_date and event_end_date.

    The task is to compare a tourist's arrival and departure dates with the start and end dates of events, yielding a "what's on" list of available events. We thought we'd killed it with a grand flash of logic that goes like this: disregard any event that ends before the tourist arrives, and any that begins after the departure date. I wrote this query:

        SELECT wposts.*
        FROM wp_posts wposts
        LEFT JOIN wp_postmeta wpostmeta ON wposts.ID = wpostmeta.post_id
        LEFT JOIN wp_term_relationships ON (wposts.ID = wp_term_relationships.object_id)
        LEFT JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)
        WHERE wp_term_taxonomy.taxonomy = 'category'
        AND wp_term_taxonomy.term_id IN (3,4)
        AND (
            wpostmeta.meta_key = 'event_start_date'
            AND NOT (concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2)) > '2010-07-31')
            AND (
                wpostmeta.meta_key = 'event_end_date'
                AND NOT (concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2)) < '2010-05-01')
            )
        )
        ORDER BY wpostmeta.meta_value ASC

    And of course it returns no records. The problem, I believe, is in the dual reference to wpostmeta.meta_key — but how do I get around that?
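
    The diagnosis is right: a single wpostmeta alias can never satisfy meta_key = 'event_start_date' AND meta_key = 'event_end_date' at the same time, so the WHERE clause eliminates every row. The usual fix is one wp_postmeta join per meta_key, each under its own alias. A sketch, reusing the question's dates and term IDs, and assuming the meta values are stored as dd/mm/yyyy (which the substring offsets imply); note the original also spells SUBSTR as "subst", and STR_TO_DATE can replace the concat juggling entirely:

        SELECT wposts.*
        FROM wp_posts wposts
        JOIN wp_term_relationships tr ON tr.object_id = wposts.ID
        JOIN wp_term_taxonomy tt ON tt.term_taxonomy_id = tr.term_taxonomy_id
                                AND tt.taxonomy = 'category'
                                AND tt.term_id IN (3,4)
        JOIN wp_postmeta startmeta ON startmeta.post_id = wposts.ID
                                  AND startmeta.meta_key = 'event_start_date'
        JOIN wp_postmeta endmeta ON endmeta.post_id = wposts.ID
                                AND endmeta.meta_key = 'event_end_date'
        -- event must start before departure and end after arrival
        WHERE STR_TO_DATE(startmeta.meta_value, '%d/%m/%Y') <= '2010-07-31'
          AND STR_TO_DATE(endmeta.meta_value, '%d/%m/%Y')   >= '2010-05-01'
        ORDER BY startmeta.meta_value ASC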


  • Can not issue data manipulation statements with executeQuery in java

    - by user225269
    I'm trying to insert records into a MySQL database using Java. What do I change in this code so that the records are actually inserted?

        String id;
        String name;
        String school;
        String gender;
        String lang;

        Scanner inputs = new Scanner(System.in);
        System.out.println("Input id:");
        id = inputs.next();
        System.out.println("Input name:");
        name = inputs.next();
        System.out.println("Input school:");
        school = inputs.next();
        System.out.println("Input gender:");
        gender = inputs.next();
        System.out.println("Input lang:");
        lang = inputs.next();

        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/employee_record", "root", "MyPassword");
        PreparedStatement statement = con.prepareStatement(
            "insert into employee values('id', 'name', 'school', 'gender', 'lang');");
        statement.executeUpdate();
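
    The missing piece is parameter binding. As written, the statement inserts the literal strings 'id', 'name', and so on into every row instead of the values the user typed. A sketch, assuming the employee table's five columns line up with these values as in the original statement:

        // '?' placeholders plus setString bind the actual user input
        PreparedStatement statement = con.prepareStatement(
            "insert into employee values(?, ?, ?, ?, ?)");
        statement.setString(1, id);
        statement.setString(2, name);
        statement.setString(3, school);
        statement.setString(4, gender);
        statement.setString(5, lang);
        statement.executeUpdate();

    (The title's error, "Can not issue data manipulation statements with executeQuery()", is the MySQL driver's way of saying that INSERT/UPDATE/DELETE must go through executeUpdate(), as above.)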


  • Using Spring, Hibernate and Scala, is there a better way to load test data than DbUnit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit groups your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests — which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hell-bent on forcing you to use an XML namespace to define your XML, and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hell-bent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, that process gets bloated too once the data grows in size and things get entangled. There has got to be a better way to split your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.
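
    One pattern that sidesteps all four complaints — a hedged sketch, not a library recommendation: build fixtures in code inside the test's own (rolled-back) transaction. Inserts run in the order written, foreign key constraints stay enabled, and references are plain variables instead of ids scattered through XML. Class and method names here are illustrative:

        // org.junit.Before; `session` is the test's Hibernate session
        @Before def seedData(): Unit = {
          val author = new Author("Kate")              // parent row first
          session.save(author)
          session.save(new Book("Test Data", author))  // then the dependent row
          session.flush()
        }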


  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem as the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with select tags to help pick related objects for one-to-many relationships.

    For instance, I might have a system with Neighbors, where each Neighbor belongs to a Neighborhood. In version 1 of the application I create an edit form with a drop-down for picking the neighborhood, which simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning this works great: as long as I have maybe 100 records or fewer, my select box loads quickly and is fairly easy to use.

    However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list.

    Now, in this particular situation — hierarchical data — letting users drill down through several dynamically generated drop-downs would probably work okay. But what is the best solution when the objects/records being selected are not hierarchical in nature? In the past I've done this with a popup containing a search box and a list, but that seems clunky and dated. In today's web 2.0 world, what is a good way to find one object among many for one's forms? I've considered an Ajaxified search box, but that seems to work best for free text and falls apart a little when the data to be saved is just a reference to another object or record. Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.
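
    On the "the saved value is a reference" objection: the common workaround is to autocomplete against a visible text box but store the chosen record's id in a hidden input, so the free text is only ever a search key. A sketch using jQuery UI's autocomplete (endpoint, field ids, and the id property on the returned items are all assumptions):

        // Server returns e.g. [{label: "Elm Park", value: "Elm Park", id: 42}, ...]
        $("#neighborhood").autocomplete({
            source: "/neighborhoods/search",
            minLength: 2,
            select: function (event, ui) {
                $("#neighborhood_id").val(ui.item.id);  // the FK actually saved
            }
        });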


  • LINQ, should I join those two queries together?

    - by 5YrsLaterDBA
    I have a Logins table which records, with a timestamp, when a user logs in, logs out, or fails to log in. Now I want the list of login failures that happened after the last successful login and within the last 24 hours. What I do now is get the last login timestamp first, then use a second query to get the final list. Do you think I should join those two queries together? Why, or why not?

        var lastLoginTime = (from inRecord in db.Logins
                             where inRecord.Users.UserId == userId
                                   && inRecord.Action == "I"
                             orderby inRecord.Timestamp descending
                             select inRecord.Timestamp).Take(1);

        if (lastLoginTime.Count() == 1)
        {
            DateTime lastInTime = (DateTime)lastLoginTime.First();
            DateTime since = DateTime.Now.AddHours(-24);
            String actionStr = "F";
            var records = from record in db.Logins
                          where record.Users.UserId == userId
                                && record.Timestamp >= since
                                && record.Action == actionStr
                                && record.Timestamp > lastInTime
                          orderby record.Timestamp
                          select record;
        }
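
    Folding the lookup in as a scalar subquery gets everything in one round trip — a hedged sketch, not a drop-in replacement:

        DateTime since = DateTime.Now.AddHours(-24);
        var failures =
            from r in db.Logins
            where r.Users.UserId == userId
               && r.Action == "F"
               && r.Timestamp >= since
               // "after the last successful login", inlined as a subquery
               && r.Timestamp > (from i in db.Logins
                                 where i.Users.UserId == userId
                                       && i.Action == "I"
                                 select i.Timestamp).Max()
            orderby r.Timestamp
            select r;

    One caveat: if the user has never logged in successfully, Max() over an empty set misbehaves (an exception or NULL, depending on provider), which is exactly the case the original two-step version handles with its Count() check — so the split version is defensible too.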


  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with the registration of both a user and his/her pet. The user and the pet are treated as separate entities and both require separate registration forms; however, the pet has to be linked to the user via a foreign key in the database. The process is that when a new user joins the site, they first register their pet, then they register themselves. The reason for this order is to check the pet's eligibility for the site first (there are some criteria to be met), instead of having the user sign up only to then find out their pet is ineligible. It is this ordering of the form submissions that is causing me a bit of a headache, as follows...

    The site is being developed with an MVC framework: user registration is managed by a method in the User_form controller, while pet registration is managed by a method in the Pet_form controller. Pet registration happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added after user registration (e.g. by retrieving the pet's id from the session). However, doing it this way could leave redundant data: pet records would be created in the database, but if the user never actually registers too, the pets become ownerless records in the DB.

    The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form. Once the user is created, I can pass the serialized data AND the owner_id to a method in the Pet model which updates the DB. However, I also need to set the newly created $pet to $this->pet, which I then access in a sequence of other related forms. Should I just set the session variable in the model method? Then, in the Pet controller constructor, check for a pet stored in the session and, if found, assign it to $this->pet... If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!


  • Select rows from table1 and all the children from table2 into an object

    - by Patrick
    I want to pull data from the table "Province_Notifiers" and also fetch all corresponding items from the table "Province_Notifier_Datas". The Province_Notifier table has a GUID to identify it (PK); Province_Notifier_Datas has a column called BelongsToProvince_ID which is a foreign key to the Province_Notifier table's GUID. I tried something like this:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))
                      };

    This line is not working, and I am trying to figure out how to pull the data from the second table into that Province_Notifier_Datas variable:

        Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))

    I can add a record easily by adding the second table's row into Province_Notifier_Datas, but I can't fetch it back:

        Province_Notifier dbNotifier = new Province_Notifier();
        // set some values for dbNotifier
        dbNotifier.Province_Notifier_Datas.Add(
            new Province_Notifier_Data
            {
                BelongsToProvince_ID = userInput.Value.ProvinceId,
                EventText = GenerateNotificationDetail(notifierDetail)
            });

    This works and inserts the data correctly into both tables.

    Edit: These error messages are thrown:

        Cannot convert from 'Province_Notifier_Data' to 'System.Collections.Generic.IEnumerable'

    If I look in Visual Studio, the variable Province_Notifier_Datas is of type System.Data.Linq.EntitySet.

        The best overloaded method match for 'System.Collections.Generic.List.AddRange(System.Collections.Generic.IEnumerable)' has some invalid arguments

    Edit:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID into data2list
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = new EntitySet<Province_Notifier_Data>().AddRange(data2List)
                      };

        Error 3: The name 'data2List' does not exist in the current context.
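
    Three separate problems are stacked here, so a hedged sketch of one way through: Error 3 is just a case mismatch (the group join introduces data2list, the projection says data2List); AddRange returns void, so it can never be the value of an assignment — materialize with ToList() instead; and LINQ to SQL typically rejects "explicit construction of entity type" for mapped classes like Province_Notifier, so project into an anonymous type (or a plain DTO):

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && !data.Sent
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID
                          into children                 // one group per notifier
                      select new
                      {
                          Notifier = data,              // the parent entity itself
                          Datas = children.ToList()     // its child rows, materialized
                      };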


  • Table index design

    - by Swoosh
    I would like to add indexes to my table, and I am looking for general ideas on how to choose them — other than the clustered PK. I would like to know what to look for when doing this. So, my example:

    This table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: massive bulk inserts add data to this table. The table has 27 columns (so far, and counting :D):

    - int × 9 columns = ids
    - varchar × 10 columns
    - bit × 2 columns
    - datetime × 5 columns

    INT COLUMNS: all of these are int ids referencing tables that are usually much smaller than the Task table (10-50 records max) — for example, a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a "parent ID" column (a self-reference). All the "small" tables have PKs the usual way: clustered.

    STRING COLUMNS: there is a Company column (a string!) that is "5 characters long all the time", and every user is restricted by it. If there are 15 different companies in Task, a logged-in user only sees one — so there is always a filter on this column. Might it be a good idea to add an index on it?

    DATE COLUMNS: I think you don't index these... right? Or can/should they be?
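
    Since every query filters on Company, a composite nonclustered index led by that column is the usual starting point — a sketch, not a prescription (key and INCLUDE columns are illustrative; measure against the real workload, and remember every extra index taxes those massive bulk inserts):

        CREATE NONCLUSTERED INDEX IX_Task_Company
            ON Task (Company, StatusId)   -- most selective filters first
            INCLUDE (PriorityId);         -- covering columns avoid lookups

    And datetime columns index exactly like any other type: a date that appears in range predicates (WHERE CreatedDate >= ...) is often one of the most useful index keys there is.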


  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want confirmation on what I think the problem is.

    We have a very large data flow task inside a control flow container. This container is set up with TransactionOption = Supported — i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "if a record matching these parameters doesn't exist, then write it".

    Now, the issue is that three records are passed into this proc, all with the same parameters. Logically the first record finds no match and a record is created; the second record (with the same parameters) also finds no match, and another record is created. My understanding is that the first 'record' passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should.

    In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway...

    Hope that makes sense, and any advice gratefully received. Workarounds confer god-like status on you.
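
    One detail worth checking before settling on that theory: on a single connection, SQL Server statements do see that connection's own uncommitted writes, so strictly sequential calls on one connection would find the first insert. The duplicates more often point at concurrency — parallel paths in the data flow, or the check and the insert racing between calls. If concurrent calls are possible, the usual guard is to make the existence check and insert atomic inside the proc. A sketch with illustrative names, not the legacy proc itself:

        -- UPDLOCK/HOLDLOCK makes a second concurrent caller wait on the
        -- range until the first caller's insert commits or rolls back.
        IF NOT EXISTS (SELECT 1
                       FROM dbo.TargetTable WITH (UPDLOCK, HOLDLOCK)
                       WHERE KeyColumn = @KeyValue)
        BEGIN
            INSERT INTO dbo.TargetTable (KeyColumn /*, ... */)
            VALUES (@KeyValue /*, ... */);
        END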


  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there. Here is the situation we have:

    a) an Access database/application that records a significant amount of data — significant fields would be hours, # of sales, # of unreturned calls, etc.
    b) an Excel document that connects to the Access database and pulls data in to visualize it.

    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable, and the main 'visual form' then uses VLOOKUP to get results from it, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route — I just need directions.

    Thanks in advance!

    Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself; removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I get the data rapidly and reliably without so much time involved (it loads around 3,000 records into the PivotTables)?
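
    One route worth trying, under the assumption that the visual form only needs aggregates rather than raw rows: let a saved Access query do the summarising, and point Excel's data connection at that query so it pulls dozens of rows instead of ~3,000. Field names here are illustrative:

        SELECT HourBucket,
               SUM(Sales)           AS TotalSales,
               SUM(UnreturnedCalls) AS TotalUnreturnedCalls
        FROM   ActivityLog
        GROUP  BY HourBucket;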


  • How to disable items in a ListView?

    - by Techeretic
    I have a ListView which is populated with records from the database. Now I have to make some records visible but unavailable for selection — how can I achieve that? Here's my code:

        public class SomeClass extends ListActivity {
            private static List<String> products;
            private DataHelper dh;

            public void onCreate(Bundle savedInstanceState) {
                dh = new DataHelper(this);
                products = dh.GetMyProducts(); /* returns a List<String> */
                super.onCreate(savedInstanceState);
                setListAdapter(new ArrayAdapter<String>(this, R.layout.myproducts, products));
                ListView lv = getListView();
                lv.setTextFilterEnabled(true);
                lv.setOnItemClickListener(new OnItemClickListener() {
                    @Override
                    public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) {
                        Toast.makeText(getApplicationContext(),
                                ((TextView) arg1).getText(), Toast.LENGTH_SHORT).show();
                    }
                });
            }
        }

    The layout file myproducts.xml is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <TextView xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:padding="10dp"
            android:textSize="16sp">
        </TextView>
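
    The standard approach is to override isEnabled() (and areAllItemsEnabled()) in the adapter, so the rows still render but don't respond to taps. How a row is judged "disabled" depends on your data; the position set and its helper below are hypothetical:

        // GetDisabledPositions() is a hypothetical DataHelper method
        final Set<Integer> disabledPositions = dh.GetDisabledPositions();

        setListAdapter(new ArrayAdapter<String>(this, R.layout.myproducts, products) {
            @Override
            public boolean areAllItemsEnabled() {
                return false;  // tells the list some rows are disabled
            }

            @Override
            public boolean isEnabled(int position) {
                return !disabledPositions.contains(position);
            }
        });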


  • Memory leakage using DataTables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records into each of two DataTables. I need to do some manipulation and insert the records into SQL Server in a Manipulate(dt1, dt2) function, and I have to do this 15 times, as you can see in the for loop. Now I want to know which would be the more effective approach in terms of memory usage. I've used the first approach; please suggest the best one.

    (1)

        for (int i = 0; i < 15; i++)
        {
            DataTable dt1 = GetInfo(i);
            DataTable dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

    (OR)

    (2)

        DataTable dt1 = new DataTable();
        DataTable dt2 = new DataTable();
        for (int i = 0; i < 15; i++)
        {
            dt1 = null;
            dt2 = null;
            dt1 = GetInfo();
            dt2 = GetData();
            Manipulate(dt1, dt2);
        }

    Thanks, Vix.
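
    For what it's worth, the two versions are nearly identical to the garbage collector: in (2) the old tables become unreachable the moment the variables are reassigned, so nulling them first buys nothing (and the two initial `new DataTable()` allocations are wasted). If the goal is to keep peak memory down, a sketch of (1) with deterministic cleanup — DataTable implements IDisposable:

        for (int i = 0; i < 15; i++)
        {
            // Scoping and disposing the tables lets each 30,000-row pair
            // be reclaimed before the next iteration allocates more.
            using (DataTable dt1 = GetInfo(i))
            using (DataTable dt2 = GetData(i))
            {
                Manipulate(dt1, dt2);
            }
        }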


  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is in my LAN (ServerLocal), the other is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote — this is one-way traffic; communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought replication would be a nice fit, so I made ServerLocal a publisher, and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: records that exist on ServerRemote but not on ServerLocal are removed. This doesn't matter for most of my tables, but in some of them (aspnet_users, for instance) I'd like to keep the existing data and update records only where necessary. What kind of replication fits my problem?


  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            contact    VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column present in all tables is archive_id, which is always part of the primary key. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition, incrementing the archive_id of those records accordingly.

    My problem is with the SELECT statements to do the actual duplication of the data. Because the columns vary, I am struggling to come up with a unified SELECT statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp AS (SELECT * FROM ORIGINAL_TABLE);
        UPDATE temp SET archive_id = something;
        INSERT INTO ORIGINAL_TABLE (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have a solution?
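
    A DDL-free alternative — a hedged sketch: build each table's column list from the data dictionary and run one INSERT ... SELECT per table, swapping archive_id for the new value inline. No temp table, so restore points survive. Table names, the archive ids, and LISTAGG (11gR2 and later; older versions need a cursor loop to build the list) are all assumptions here:

        DECLARE
          v_cols VARCHAR2(4000);
        BEGIN
          FOR t IN (SELECT table_name FROM user_tables
                    WHERE table_name IN ('TB_CUSTOMERS', 'TB_SUPPLIERS')) LOOP
            -- every column except archive_id, in dictionary order
            SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
              INTO v_cols
              FROM user_tab_columns
             WHERE table_name = t.table_name
               AND column_name <> 'ARCHIVE_ID';

            EXECUTE IMMEDIATE
              'INSERT INTO ' || t.table_name || ' (' || v_cols || ', archive_id) ' ||
              'SELECT ' || v_cols || ', :new_id FROM ' || t.table_name ||
              ' WHERE archive_id = :old_id'
              USING 2, 1;  -- illustrative new/old archive ids
          END LOOP;
        END;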


  • C# comparing two files regex problem.

    - by Mike
    Hi everyone. What I'm trying to do is open a huge list of files (about 40k records) and match each one against a line in a file that contains 2 million records; if my line from file A matches a line in file B, I write that line out. File A contains a bunch of file names without extensions, and file B contains full file paths, including extensions. I'm using this, but I can't get it to go:

        string alphaFilePath = @"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt";
        List<string> alphaFileContent = new List<string>();
        using (FileStream fs = new FileStream(alphaFilePath, FileMode.Open))
        using (StreamReader rdr = new StreamReader(fs))
        {
            while (!rdr.EndOfStream)
            {
                alphaFileContent.Add(rdr.ReadLine());
            }
        }

        string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt";
        StringBuilder sb = new StringBuilder();
        using (FileStream fs = new FileStream(betaFilePath, FileMode.Open))
        using (StreamReader rdr = new StreamReader(fs))
        {
            while (!rdr.EndOfStream)
            {
                string betaFileLine = rdr.ReadLine();
                string matchup = Regex.Match(alphaFileContent, @"(\\)(\\)(\\)(\\)(\\)(\\)(\\)(\\)(.*)(\.)").Groups[9].Value;
                if (alphaFileContent.Equals(matchup))
                {
                    File.AppendAllText(@"C:\array_tech.txt", betaFileLine);
                }
            }
        }

    This doesn't work — Regex.Match wants a single string, not the whole alphaFileContent list — and I'm having a hard time figuring out how to get my regex to work on the file that contains all the file paths (betaFilePath). Here is a sample line from the beta file:

        C:\arres_i\Grn\Ora\SEC\DBZ_EX1\Nes\001\DZO-EX00001.txt

    And here is the line I'm trying to compare from my alpha file:

        DZO-EX00001
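
    A sketch of the usual shape for this job: load the 40k bare names into a HashSet, then stream the 2M paths once and test each path's file name against the set — Path.GetFileNameWithoutExtension replaces the regex, and a single StreamWriter avoids reopening the output file per match. (File.ReadLines needs .NET 4; on older frameworks, substitute File.ReadAllLines or the StreamReader loop from the question.)

        var wanted = new HashSet<string>(
            File.ReadLines(alphaFilePath),
            StringComparer.OrdinalIgnoreCase);

        using (var writer = new StreamWriter(@"C:\array_tech.txt", true))
        {
            foreach (string betaFileLine in File.ReadLines(betaFilePath))
            {
                // e.g. "C:\...\DZO-EX00001.txt" -> "DZO-EX00001"
                if (wanted.Contains(Path.GetFileNameWithoutExtension(betaFileLine)))
                    writer.WriteLine(betaFileLine);
            }
        }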


  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and want to grab Apache log records between two given dates. Assume that a date is formatted as such: 22/Dec/2009:00:19 (day/month/year:hour:minute).

    Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 200912220019 (year, month, day, hour, minute), making a date comparison trivial. But running a regex on each record of a large file — say, one containing a quarter of a million records — is extremely costly. Is there any other method not involving regex substitution? Thanks in advance.

    Edit: here's the function doing the conversion/comparison. "from" and "to" are the interval bounds in the aforementioned format, and "t" is the raw Apache log date/time field (e.g. [22/Dec/2009:00:19:36):

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }
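
    Since the Apache date field is fixed-width, a sketch that avoids regex entirely: substr() slices the pieces by position and index() ranks the month, so sub(), split(), and match() all disappear. This assumes t still carries its leading "[", as in the original:

        # "[22/Dec/2009:00:19:36" -> "200912220019", no regex involved
        function dateInRange(t, from, to,    m, s) {
            m = index("JanFebMarAprMayJunJulAugSepOctNovDec", substr(t, 5, 3))
            s = substr(t, 9, 4) sprintf("%02d", (m + 2) / 3) \
                substr(t, 2, 2) substr(t, 14, 2) substr(t, 17, 2)
            return s >= from && s <= to
        }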


  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table.

    Now, I have to use temp tables, because more than one copy of this app runs at once and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd-party tools for this, since I am transforming the data in memory.

    My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute. There has to be a more efficient way — I'm able to read and write the records on disk in a matter of seconds...
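
    The usual answer here is SqlBulkCopy (part of ADO.NET, so no 3rd-party tools), which streams the DataTable over the bulk-load interface instead of issuing one INSERT per row. A minimal sketch — the one real constraint is that #table is only visible to the session that created it, so this must run on that same connection (and inside the same transaction, if one is open):

        using (var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
        {
            bulk.DestinationTableName = "#table";
            bulk.BulkCopyTimeout = 0;       // no timeout; tune as needed
            bulk.WriteToServer(dataTable);  // streams all rows in one shot
        }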


  • Using PHP Frameworks to get Web 2.0 or Ajax and Other Special Features

    - by user504958
    I'm still struggling to understand when or how to use a framework such as Zend or Yii. Here are some of the features I'm going to need in my next project, and I don't understand frameworks well enough to know where the framework fits into the picture. I won't say exactly what the project is, but think of something like Yelp or Merchant Circle — on a smaller scale, of course: a directory project. It will contain a search box and links to all and/or popular categories.

    1) Autosuggest in the search box (I already know how to do this using jQuery).
    2) Analyze the search terms entered into the search box to determine whether a word is misspelled; offer to correct the misspelling, or automatically correct it and show relevant results.
    3) Offer items, links, or ads related to the search term.
    4) Allow users to determine which fields are shown.
    5) Allow users to sort the results however they choose.
    6) Allow editing of records in a grid/list view; post forms without refreshing the page; delete or add records without going to a different page or reloading the current one.


  • How do I implement Hibernate pagination using a cursor (so the results stay consistent despite new records being added)?

    - by hunterae
    Hey all. Is there any way to maintain a database cursor with Hibernate between web requests?

    Basically, I'm trying to implement pagination, but the data being paged changes constantly (new records are added to the database). We are trying to set it up so that when you do your initial search (returning at most 5,000 results) and then page through the results, the same records always appear on the same page (i.e. we're not re-running the query every time the next and previous page buttons are clicked). The way we currently implement this is to select at most 5,000 primary keys from the table we're paging, store those keys in memory, and then use 20 primary keys at a time to fetch their details from the database.

    However, we want to get away from storing these keys in memory and would much prefer a database cursor that we keep going back to, moving backwards and forwards over it to generate pages. I tried doing this with Hibernate's ScrollableResults, but found that calling methods like next() and previous() causes an exception if you are in a different web request / Hibernate session (no surprise there). Is there any way to reattach a ScrollableResults object to a Session, much the same way you would reattach a detached database object to make it persistent? Are there any other approaches to consistent paging without caching the primary keys?
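
    A ScrollableResults wraps an open JDBC ResultSet and therefore a pinned connection, which is why it can't outlive its Session and can't be reattached. One alternative that keeps pages stable while storing only two values between requests — a hedged sketch, assuming a monotonically increasing surrogate id (entity and property names are illustrative): snapshot the maximum id at search time, then page with keyset ("seek") queries, so rows inserted later can never shift an existing page:

        // Page N = "the 20 rows after the last id of page N-1", capped at
        // the id snapshot taken when the search was first run.
        List<?> page = session.createQuery(
                "from SearchHit h " +
                "where h.id > :lastIdOfPreviousPage " +
                "  and h.id <= :maxIdAtSearchTime " +
                "order by h.id")
            .setParameter("lastIdOfPreviousPage", lastId)
            .setParameter("maxIdAtSearchTime", maxId)
            .setMaxResults(20)
            .list();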


  • Accessing an Access DB from Outlook via VBA

    - by camastanta
    Hi. The situation: in Outlook I receive a message from a server. The content of the message needs to be put into an Access DB — but no other message with the same date may exist. So I need to check in the DB whether there is already a message with the same date and time; if one exists, it needs to be replaced, and otherwise the message is added to the database. The database contains a list of current positions of the vehicles on the road.

    The problem: I have trouble comparing a date-time with a date-time in an Access DB via VBA. The query I use returns no records, although there is a matching record in the database. This is the query I use:

        adoRS.Open "SELECT * FROM currentpositions WHERE ((currentpositions.[dateLT])=" & "#" & date_from_message & "#" & ")", _
                   adoConn, adOpenStatic, adLockOptimistic

    Second, I need to know what the result of that query is: how can I determine the number of records my query gives me?

    Thanks, camastanta
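
    A sketch of the usual fixes, offered with hedging rather than as the one true answer: Jet/ACE parses #...# literals most reliably in US month-first order, so format the value explicitly instead of relying on the locale's default string conversion; and a client-side static cursor makes RecordCount trustworthy:

        ' Set the cursor location before Open so RecordCount is reliable
        adoRS.CursorLocation = adUseClient
        adoRS.Open "SELECT * FROM currentpositions " & _
                   "WHERE currentpositions.[dateLT] = #" & _
                   Format$(date_from_message, "mm\/dd\/yyyy hh:nn:ss") & "#", _
                   adoConn, adOpenStatic, adLockOptimistic
        Debug.Print adoRS.RecordCount   ' number of matching records

    A parameterized ADODB.Command would sidestep the literal-formatting question entirely and is worth considering as well.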


  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'slow query' warnings are still popping in. Basically, I have Blogs (with a corresponding table) as well as a table that keeps track of how many times each blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and every hit is logged as an individual row.

    I have tried indexes on the fields included in the WHERE clause, but they don't seem to help. I have also tried to clean the table each week by removing records older than one week. So I'm asking you: how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
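
    Neither of the existing single-column indexes can satisfy both predicates at once, which is the likely reason they don't help. A composite index lets InnoDB resolve the whole WHERE in one range scan:

        -- equality column first, range column second
        ALTER TABLE blog_views
          ADD INDEX index_blog_views_on_blog_and_date (blog_id, created_at);

    If the count is still too hot after that, the usual next step is a counter cache — increment a per-blog (or per-blog-per-day) counter on each hit — so the page never has to count raw rows at all.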

