Search Results

Search found 14354 results on 575 pages for 'existing records'.

Page 36 of 575

  • Emails going into junk and SPF records

    - by user346443
    Hi, our emails are being flagged as junk. I have two different websites, each with its own dedicated IP address: sitea.com = xx.xx.xx.43, siteb.com = xx.xx.xx.44. I'm using hMailServer to host our emails and have SMTP bound to the IP address xx.xx.xx.42. I'm aware that I can set up an SPF record to state which servers email can be sent from: v=spf1 mx ip4:xx.xx.xx.43 mx:mail.sitea.com ip4:xx.xx.xx.42 -all Would the fact that emails are not sent from the sites' IPs be causing them to be flagged as junk? Cheers, Cam
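
    For reference, a sketch of what a complete record for one of the domains could look like, reusing the placeholder addresses from the question (the zone-file layout is an illustrative assumption, not something given in the question):

        sitea.com.    IN TXT    "v=spf1 ip4:xx.xx.xx.42 ip4:xx.xx.xx.43 mx -all"

    Receiving servers check the IP the SMTP connection actually comes from against the sending domain's record, so each domain's record needs to list xx.xx.xx.42 if that is where hMailServer actually sends; siteb.com would need an analogous record listing xx.xx.xx.42 and xx.xx.xx.44.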

    Read the article

  • How to display how many times each record in a table is used by another table

    - by Fredy
    I have a problem with my query. Below are two tables, tbl_tag and tbl_tag_usedby. I want to show how many times each record in tbl_tag is used by records in tbl_tag_usedby. Here is the query I use: SELECT t.*, COUNT(u.tagid) AS totale FROM tbl_tag t LEFT JOIN tbl_tag_usedby u ON u.tagid = t.id AND t.status =1 GROUP BY u.tagid and the results are as below: In this case records with id 2 to 6 do not appear in the query results. I want records with id 2 to 6 to also be shown, with the value 0 in the "totale" field. Can anyone help me?
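
    A sketch of the usual fix, assuming the goal is one result row per tag: group by the tag's own id rather than the joined column, and move the status filter into the WHERE clause so it restricts which tags are listed instead of sitting in the outer-join condition (table and column names are the ones from the question):

        SELECT t.*, COUNT(u.tagid) AS totale
        FROM tbl_tag t
        LEFT JOIN tbl_tag_usedby u ON u.tagid = t.id   -- LEFT JOIN keeps tags with no matches
        WHERE t.status = 1
        GROUP BY t.id;                                 -- one group per tag, so unused tags get totale = 0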

    Read the article

  • Grouping records by subsets SQL

    - by Stacy
    I have a database with PermitHolders (PermitNum = PK) and DetailedFacilities for each permit holder. In the tblPermitDetails table there are 2 columns: PermitNum (foreign key) and FacilityID (integer foreign key lookup to the Facility table). A permittee can have 1 - 29 items on their permit; e.g. Permit 50 can have a Boat Dock (FacID 4), a Paved Walkway (FacID 17), a Retaining Wall (FacID 20), etc. I need a SQL query that returns ALL permit numbers that have ONLY FacIDs 19, 20, or 28, NOT ones that have those plus any others; just that subset. I've worked on this for 4 days, would someone PLEASE help me? I HAVE posted to other boards but have not received any helpful suggestions.
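
    A sketch of one standard way to express "only these facility types", using the table and column names from the question: count, per permit, how many of its facility rows fall outside the allowed set and keep the permits where that count is zero (this assumes an engine with CASE; in Access, IIf plays the same role):

        SELECT d.PermitNum
        FROM tblPermitDetails d
        GROUP BY d.PermitNum
        HAVING SUM(CASE WHEN d.FacilityID NOT IN (19, 20, 28)
                        THEN 1 ELSE 0 END) = 0;   -- zero facilities outside {19, 20, 28}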

    Read the article

  • django DateTimeField list records by day

    - by dotty
    Hey, I have a field in one of my models which saves the creation date of an object: created_on = models.DateTimeField(blank=False, auto_now_add=True) This works as expected. In my templates I want to list objects like this: June 15 {{ objects here which were created on June 15 }} June 14 {{ objects here which were created on June 14 }} etc. Any idea how I would go about doing this? Thanks in advance.

    Read the article

  • SharePoint - Auto-increment dates in new records?

    - by ACal
    Hello, I have a list that's going to be updated with relatively static data weekly, and I wanted to create a workflow to do this automatically. The only field I'm having trouble with is Start Date. I want the new Start Date to be exactly one week after the previous week's (row's) Start Date, but I can't figure out how to capture this. I can't seem to find an easy way to get the value of the previous row. Now, theoretically, I could just have the workflow run once a week on a given day and use [Today] as the value for the field; however, a requirement is that the list can be populated a few weeks in advance if needed. Thanks in advance for any help you can provide!

    Read the article

  • how do I get the 2 most recent records

    - by fishhead
    I have a table similar to the example shown below. I would like to be able to select the two most recent entries for each AccountNo. I am using Microsoft SQL Server. Thank you for any help that you can provide.

        AccountNo, DateOfOrder, OrderID
        -----------------------------------------
        123, March 1 2010, 1
        222, March 3 2010, 2
        123, April 1 2010, 3
        345, March 15 2010, 77
        123, June 1 2010, 55
        123, March 5 2010, 33
        345, March 1 2010, 99
        222, June 1 2010, 7
        222, June 2 2010, 22
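
    A sketch of the usual SQL Server approach: rank the rows within each account by date and keep the top two. The table name Orders is an assumption, since the question does not give one:

        SELECT AccountNo, DateOfOrder, OrderID
        FROM (
            SELECT AccountNo, DateOfOrder, OrderID,
                   ROW_NUMBER() OVER (PARTITION BY AccountNo
                                      ORDER BY DateOfOrder DESC) AS rn   -- 1 = most recent per account
            FROM Orders
        ) ranked
        WHERE rn <= 2;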

    Read the article

  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID Store_ID Feature_ID Order_ID Viewed_Date Deal_ID IsTrial. The ID is auto-generated. Store_ID goes from 1 - 8. Feature_ID from 1 - let's say 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to view the number of views in a certain period (or overall) where trial is 0, for a particular store id and for a particular feature. The query takes the form of: select count(viewed_date) from theTable where viewed_date between '2009-12-01' and '2010-12-31' and store_id = '2' and feature_id = '12' and Istrial = 0 In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need better improvement than this. Right now I have more than 4 million rows. For a particular query like the one above, it looks at 3.5 million rows in order to give me a count of 500k rows. PS: I forgot to add the viewed_date filter in the query originally. Now I have done this.
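
    A sketch of a composite index shaped to that query, using the column names from the question (the index name is illustrative): equality columns first and the range column last, so the lookup can seek on store, feature and trial flag and then scan only the matching date range. MySQL has no filtered indexes, so IsTrial is simply included as a key column.

        CREATE INDEX idx_store_feature_trial_date
            ON theTable (Store_ID, Feature_ID, IsTrial, Viewed_Date);   -- also covers COUNT(viewed_date)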

    Read the article

  • R problems using rpart with 4000 records and 13 attributes

    - by josh
    I have attempted to email the author of this package without success; just wondering if anybody else has experienced this. I am having an issue using rpart on 4000 rows of data with 13 attributes. I can run the same test on 300 rows of the same data with no issue. When I run on 4000 rows, Rgui.exe runs consistently at 50% CPU and the UI hangs; it will stay like this for at least 4-5 hours if I let it run, and never exit or become responsive. Here is the code I am using on both the 300 and 4000 row subsets: train<-read.csv("input.csv",header=T) y<-train[,18] x<-train[,3:17] library(rpart) fit<-rpart(y~.,x) Is this a known limitation of rpart, or am I doing something wrong? Potential workarounds? Any assistance appreciated.

    Read the article

  • Adding new records in Access without wrecking the form

    - by Matt Parker
    I'm working on a simple Access 2003 application to keep track of things that need to be done for clients for some colleagues. Each colleague has a set of clients, and each client has a set of actions that need to be taken by a certain date. I've set up a form that consists of a combobox for client ID (indexed), a drop-down for the person who is handling that client's case, and a button for adding new clients (a standard Access-created Add Record button). The actions are listed in a subform below these three elements. The problem I've run into is that the first person I tested this on clicked the button to add a new record, then didn't fill it out and tried to select another client from the drop-down list. Access interprets this as an attempt to set the selected Client ID as the ID for the new record and rightfully throws an error for duplicate primary keys. I can think of a couple of ways around this problem, but I'd much rather hear your elegant solutions than kludge together some junk in a language I don't know. Let me know if you have any questions. Thank you.

    Read the article

  • help on integrating oracle BI into existing application

    - by ywang1129
    I have an existing application written in Perl. Now I need to integrate this application with Oracle BI. The plan is to have a button that the user can click to open Oracle BI in an iframe. Oracle BI resides on a different server from the running application. Has anyone done this before, and do you know the best practice for doing this and how much effort it involves? Another question: is it possible to customize the Oracle BI content displayed in the iframe? Thanks.

    Read the article

  • How to render all records from a nested set into a real html tree

    - by Christoph Schiessl
    I'm using the awesome_nested_set plugin in my Rails project. I have two models that look like this (simplified): class Customer < ActiveRecord::Base has_many :categories end class Category < ActiveRecord::Base belongs_to :customer # Columns in the categories table: lft, rgt and parent_id acts_as_nested_set :scope => :customer_id validates_presence_of :name # Further validations... end The tree in the database is constructed as expected. All the values of parent_id, lft and rgt are correct. The tree has multiple root nodes (which is of course allowed in awesome_nested_set). Now, I want to render all categories of a given customer in a correctly sorted, tree-like structure: for example, nested <ul> tags. This wouldn't be too difficult, but I need it to be efficient (the fewer SQL queries the better). Update: Figured out that it is possible to calculate the number of children for any given node in the tree without further SQL queries: number_of_children = (node.rgt - node.lft - 1)/2. This doesn't solve the problem but it may prove to be helpful.
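
    On the query side, a nested set lets the whole tree of one customer come back in a single statement: ordering by lft yields a depth-first, pre-order traversal, which is exactly the order needed to emit nested <ul> tags while tracking depth with a stack of rgt values. A sketch using the columns named above (the customer_id value 42 is a placeholder):

        SELECT id, parent_id, lft, rgt, name
        FROM categories
        WHERE customer_id = 42   -- all categories of one customer, every root node included
        ORDER BY lft;            -- pre-order traversal of the whole forest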

    Read the article

  • Selecting first records of a type in a given period

    - by Emanuil Rusev
    I have a database table that stores user comments: comments(id, user_id, created_at) I want to get from it the number of users that have commented for the first time in the past week. Here's what I have so far: SELECT COUNT(DISTINCT `user_id`) FROM `comments` WHERE `created_at` BETWEEN DATE_SUB(NOW(), INTERVAL 7 DAY) AND NOW() This would give the number of users that have commented, but it would not take into consideration whether these comments are the first for these users.
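
    A sketch of one way to add that condition, keeping the schema from the question: take each user's earliest comment and count only the users whose earliest comment falls inside the window.

        SELECT COUNT(*)
        FROM (
            SELECT user_id, MIN(created_at) AS first_comment   -- each user's first ever comment
            FROM comments
            GROUP BY user_id
        ) firsts
        WHERE first_comment BETWEEN DATE_SUB(NOW(), INTERVAL 7 DAY) AND NOW();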

    Read the article

  • Accessing individual HABTM records in a form

    - by Pichan
    I'm building a form in my CakePHP project that lets you edit a company's information. Among other things, every company has at least one geographical area in which it operates, but it may have more. The areas are selected individually using select dropdowns. The relationship between companies and areas is HABTM, because I need to be able to change the number of associated areas without modifying the database. Currently the associations and the corresponding data are handled separately, which isn't really a problem, but I was wondering how it could be done using as much of Cake's own 'automagic' functionality as possible?

    Read the article

  • Generate T-SQL for Existing Indexes

    - by Chris S
    How do you programmatically generate T-SQL CREATE statements for existing indexes in a database? SQL Server Management Studio provides a "Script Index as > CREATE To" command that generates code in the form: IF NOT EXISTS(SELECT * FROM sys.indexes WHERE name = N'IX_myindex') CREATE NONCLUSTERED INDEX [IX_myindex] ON [dbo].[mytable] ( [my_id] ASC )WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY] GO How would you do this programmatically (ideally through Python)?
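
    A sketch of the catalog query such a script could be built on: read the index metadata from the standard sys catalog views and assemble the CREATE INDEX text in application code (Python or otherwise). The statement below only gathers the raw pieces; stitching them into DDL is left to the caller.

        SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
               OBJECT_NAME(i.object_id)        AS table_name,
               i.name                          AS index_name,
               i.type_desc,                    -- CLUSTERED / NONCLUSTERED
               i.is_unique,
               c.name                          AS column_name,
               ic.key_ordinal,                 -- column position within the key
               ic.is_descending_key
        FROM sys.indexes i
        JOIN sys.index_columns ic
          ON ic.object_id = i.object_id AND ic.index_id = i.index_id
        JOIN sys.columns c
          ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE i.is_primary_key = 0 AND i.type > 0        -- skip primary keys and heaps
        ORDER BY schema_name, table_name, index_name, ic.key_ordinal;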

    Read the article

  • INSERT 0..n records into table 'A' based on content of table 'B' in MySql 5

    - by Robert Gowland
    Using MySQL 5, I have a task where I need to update one table based on the contents of another table. For example, I need to add 'A1' to table 'A' if table 'B' contains 'B1'. I need to add 'A2a' and 'A2b' to table 'A' if table 'B' contains 'B2', etc. In our case, the value in table 'B' we're interested in is an enum. Right now I have a stored procedure containing a series of statements like: INSERT INTO A SELECT 'A1' FROM B WHERE B.Value = 'B1'; --Repeat for 'B2' -> 'A2a'; 'B2' -> 'A2b'; 'B3' -> 'A3', etc... Is there a nicer, more DRY way of accomplishing this? Edit: There may be values in table 'B' that have no equivalent value for table 'A'.
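
    One common way to flatten the repetition is a small mapping table joined in a single INSERT ... SELECT. A sketch, where the mapping table AB_Map and its column names are illustrative assumptions rather than anything from the question:

        CREATE TABLE AB_Map (
            b_value VARCHAR(16) NOT NULL,   -- value looked for in B.Value
            a_value VARCHAR(16) NOT NULL    -- value to insert into A
        );

        INSERT INTO AB_Map (b_value, a_value) VALUES
            ('B1', 'A1'), ('B2', 'A2a'), ('B2', 'A2b'), ('B3', 'A3');

        INSERT INTO A
        SELECT m.a_value
        FROM B
        JOIN AB_Map m ON m.b_value = B.Value;   -- B values with no mapping simply produce no rows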

    Read the article

  • PHP memory exhausted when running through thousands of records

    - by James Skidmore
    I'm running the following code over a set of 5,000 results. It's failing due to the memory being exhausted.

        foreach ($data as $key => $report) {
            $data[$key]['data'] = unserialize($report['serialized_values']);
        }

    I know I can up the memory limit, but I'd like to run this without a problem instead. I'm not going to be able to keep upping the memory forever. EDIT: The $data is in this format:

        [1] => Array
            (
                [0] => 127654619178790249
                [report_id] => 127654619178790249
                [1] => 1
                [user_id] => 1
                [2] => 2010-12-31 19:43:24
                [sent_on] => 2010-12-31 19:43:24
                [3] =>
                [fax_trans_id] =>
                [4] => 1234567890
                [fax_to_nums] => 1234567890
                [5] => 'long html string here',
                [html_content] => 'long html string here',
                [6] => 'serialization_string_here',
                [serialized_values] => 'serialization_string_here',
                [7] => 70
                [id] => 70
            )

    Read the article

  • Need a method to seek the next/previous record's ID without cycling through all records

    - by dqhendricks
    I am using MySQL and PHP. I have a MySQL blog post result set with id fields and publish_date fields. I display one blog post per page, and the script knows which blog post to display based on $_GET['id'], which correlates to each blog entry's id field. I would like to reference them by id in the URL, because I would like each blog post to have a permanent URL. I would like to order the blog posts by publish date (descending). Now, on each page there will be next and previous links, which contain the $_GET['id'] value for the next and previous blog posts. How can I figure out the id of the next and previous blog posts (determined by publish_date order) without cycling through each MySQL result row? I can't mysql_data_seek(), because I do not know the row index of the current blog post id. I do not want to store a row index in a GET variable because the URLs would no longer be permanent. I obviously cannot store the row index in a SESSION variable because then direct links to specific blog posts would have broken next and previous links. Any suggestions would be greatly appreciated.
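
    A sketch of the usual pair of one-row queries for this, run with the current post's publish_date and id supplied as the user variables @cur_date and @cur_id; the table name posts is an assumption, and ties on publish_date are broken by id, which the question does not specify:

        -- the neighbouring older post
        SELECT id FROM posts
        WHERE publish_date < @cur_date
           OR (publish_date = @cur_date AND id < @cur_id)
        ORDER BY publish_date DESC, id DESC
        LIMIT 1;

        -- the neighbouring newer post
        SELECT id FROM posts
        WHERE publish_date > @cur_date
           OR (publish_date = @cur_date AND id > @cur_id)
        ORDER BY publish_date ASC, id ASC
        LIMIT 1;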

    Read the article

  • Duplicate / Copy records in the same MySQL table

    - by Digits
    Hello, I have been looking for a while now but I cannot find an easy solution for my problem. I would like to duplicate a record in a table, but of course the unique primary key needs to be updated. I have this query: INSERT INTO invoices SELECT * FROM invoices AS iv WHERE iv.ID=XXXXX ON DUPLICATE KEY UPDATE ID = (SELECT MAX(ID)+1 FROM invoices) The problem is that this just changes the ID of the existing row instead of copying the row. Does anybody know how to fix this? Thank you very much, Digits //edit: I would like to do this without typing all the field names, because the field names can change over time.
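
    A sketch of one workaround that keeps the SELECT * (so no column list is needed): copy the row into a temporary table, change only its ID there, and insert it back. The XXXXX placeholder is kept from the question, and the MAX(ID)+1 scheme mirrors the original attempt (it is not safe under concurrent inserts):

        CREATE TEMPORARY TABLE tmp_invoice
            SELECT * FROM invoices WHERE ID = XXXXX;         -- the row to duplicate

        UPDATE tmp_invoice SET ID = (SELECT MAX(ID) + 1 FROM invoices);

        INSERT INTO invoices SELECT * FROM tmp_invoice;      -- same columns, new ID
        DROP TEMPORARY TABLE tmp_invoice;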

    Read the article

  • SQL COUNT records in table 2 JOINS away

    - by Fred K
    Using MySQL, I have three tables:

        projects:
        ID  name
        1   "birthday party"
        2   "soccer match"
        3   "wine tasting evening"
        4   "dig out garden"
        5   "mountainbiking"
        6   "making music"

        batches:
        ID  projectID  templateID  when
        1   1          1           7 days before
        2   1          1           1 day before
        3   4          2           21 days before
        4   4          1           7 days before
        5   5          1           7 days before
        6   3          5           7 days before
        7   3          3           14 days before
        8   5          1           14 days before

        templates:
        ID  message
        1   "Hi, I'd like to invite ..."
        2   "Dear Sir, Madam, ..."
        3   "Can you please ..."
        4   "Would you like to ..."
        5   "To all dear friends ..."
        6   "Does any of you guys ..."

    I would like to display a table of templates and the number of projects they're used in. So, the result should be:

        templateID  projectCount
        1           3
        2           1
        3           1
        4           0
        5           1
        6           0

    I've tried all kinds of SQL queries using various JOINs, but I guess this is too complicated for me. Is it possible to get this result using a single SQL statement?
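
    A sketch of one way to get exactly that result, using the table and column names above: outer-join templates to batches so unused templates still appear, and count distinct projects so several batches of the same project count once.

        SELECT t.ID AS templateID,
               COUNT(DISTINCT b.projectID) AS projectCount   -- 0 for templates with no batches
        FROM templates t
        LEFT JOIN batches b ON b.templateID = t.ID
        GROUP BY t.ID
        ORDER BY t.ID;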

    Read the article

  • Delayed Jobs is not finding Records and failing..

    - by Trip
    In my app, Delayed Job isn't running automatically on my server anymore. It used to. When I manually SSH in and run rake jobs:work, I get this:

        * Starting job worker host:ip-(censored) pid:21458
        * [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
        * [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts

    This repeats roughly 20 times over what I think is several jobs. Then I get a few of these:

        [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob

    And then finally one of these:

        12 jobs processed at 73.6807 j/s, 12 failed ...

    Any ideas what I should be mulling over? Thanks so much!

    Read the article

  • Carrot (Python) [errno 10054] An existing connection was forcibly closed by the remote host

    - by Meditation
    Hi all, we are using Carrot in our Python project. I wrote a Python script acting as the consumer of the message queue. I invoked this Python script from the command-line shell in Windows 7 as python consumer.py. However, after a while, the running session was aborted with the error: [errno 10054] An existing connection was forcibly closed by the remote host The producer session is still running fine on the Linux server. Just wondering how I can fix this and have a long-running consumer session on Windows. Thanks in advance.

    Read the article

  • Creating a design document from existing Java code

    - by BigBoss
    I have existing Java code and need to create a design document based on it. For starters, even getting all functions with their input/output parameters would help the overall process. Note: there is no commented documentation on any procedures, functions, or classes. Last but not least, let me know of any good tool which would reduce the time required for this phase, as currently we write out every flow and related material.

    Read the article
