Search Results

Search found 1503 results on 61 pages for 'timestamp'.

Page 14/61

  • Manipulating existing XDocument (fails)

    - by Daemonfire3002nd
    Hi there, I got the following code snippet from my Silverlight application:

        var messages = from message in XcurrentMsg.Descendants("message")
                       where DateTime.Parse(message.Attribute("timestamp").Value)
                                 .CompareTo(DateTime.Parse(MessageCache.Last_Cached())) > 0
                       select new
                       {
                           ip = message.Attribute("ip").Value,
                           timestamp = message.Attribute("timestamp").Value,
                           text = message.Value,
                       };

        if (messages == null)
            throw new SystemException("No new messages recorded. Application tried to access non existing resources!");

        foreach (var message in messages)
        {
            XElement temporaryElement = new XElement("message", message.text.ToString(),
                new XAttribute("ip", message.ip.ToString()),
                new XAttribute("timestamp", message.timestamp.ToString()));
            XcurrentMsg.Element("root").Element("messages").Add(temporaryElement);
            AddMessage(BuildMessage(message.ip, message.timestamp, message.text));
            msgCount++;
        }
        MessageCache.CacheXML(XcurrentMsg);
        MessageCache.Refresh();

    XcurrentMsg is an XDocument fetched from my server containing messages, with this structure:

        <root>
          <messages>
            <message ip="" timestamp="">Text</message>
          </messages>
        </root>

    I want to get all "message" elements newer than the last time I cached XcurrentMsg. This works fine as long as I cut out the "XElement temporaryElement" and "XcurrentMsg.Element..." parts and simply use the current message string as output. But I want the new messages to be saved in my XcurrentMsg / cache. If I do not cut that part out, my application goes crazy: I think it writes elements to XcurrentMsg endlessly, without stopping. I cannot figure out what the problem is. Regards,
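
    A plausible culprit, offered as a hedged guess: LINQ queries are lazily evaluated, so the messages query is re-run while the foreach adds matching <message> elements to the very document being queried, and the enumeration keeps finding its own output. Materializing the query before the loop (for example, foreach (var message in messages.ToList())) is the usual remedy. The same pitfall, sketched in Python purely for illustration:

        import itertools

        messages = [{"timestamp": 2, "text": "hi"}]
        cached_until = 1

        # A lazy query over the live list - nothing runs yet, just like
        # the deferred LINQ expression above.
        new_messages = (m for m in messages if m["timestamp"] > cached_until)

        # Appending inside the loop makes the lazy query keep finding its
        # own output; islice() is only here so the demo terminates.
        for m in itertools.islice(new_messages, 5):
            messages.append(dict(m))

        print(len(messages))  # 6 and climbing - without islice() this never ends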

  • How to use Python and BeautifulSoup to print timestamp/last updated time (from HTML:) for each row?

    - by cesalo
    How to use Python and BeautifulSoup to print the timestamp/last-updated time (from "HTML:") for each row? Thanks a lot!

    Can I print, after each row:
    a) the date/time - the time at which the Python code is executed, and
    b) the last-updated time - taken from the fetched page ("HTML:")?

    HTML structure: one td containing two tables; each table has a few tr, and within each tr there are a few td holding the data.

    HTML:

        <td>
          <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center">
            <tbody>
              <tr>
                <td colspan="2" class="verd_black11">Last Updated: 18/08/2014 10:19</td>
              </tr>
              <tr>
                <td colspan="3" class="verd_black11">All data delayed at least 15 minutes</td>
              </tr>
            </tbody>
          </table>
          <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center">
            <tbody id="tbody">
              <tr id="tr0" class="tableHdrB1" align="center">
                <td align="centre">C Aug-14 - 15000</td>
                <td align="right"> - </td>
                <td align="right">5</td>
                <td align="right">9,904</td>
              </tr>
            </tbody>
          </table>
        </td>

    Code:

        import urllib2
        from bs4 import BeautifulSoup

        contenturl = "HTML:"
        soup = BeautifulSoup(urllib2.urlopen(contenturl).read())
        table = soup.find('tbody', attrs={'id': 'tbody'})
        rows = table.findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            for td in cols:
                t = td.find(text=True)
                if t:
                    text = t + ';'
                    print text,
            print

    Output from the above code:

        C Aug-14 - 15000 ; - ; 5 ; 9,904

    Expected output:

        C Aug-14 - 15000 ; - ; 5 ; 9,904 ; 18/08/2014 ; 13:48:00 ; 18/08/2014 ; 10:19
                                           (execute python code)  (last updated time)
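
    A sketch of one way to get there, assuming the layout shown above (the "Last Updated" cell living in the first table) and keeping the question's BeautifulSoup 4 stack; the HTML is inlined here so the sketch runs on its own:

        import datetime
        from bs4 import BeautifulSoup

        html = """
        <td>
          <table><tbody><tr>
            <td class="verd_black11">Last Updated: 18/08/2014 10:19</td>
          </tr></tbody></table>
          <table><tbody id="tbody"><tr>
            <td>C Aug-14 - 15000</td><td> - </td><td>5</td><td>9,904</td>
          </tr></tbody></table>
        </td>
        """
        soup = BeautifulSoup(html)

        # Pull the "Last Updated" value out of the first table by its label.
        label = "Last Updated:"
        cell = next(td for td in soup.find_all("td")
                    if td.get_text().strip().startswith(label))
        updated = cell.get_text().strip()[len(label):].strip()  # "18/08/2014 10:19"

        # The time this script runs, for the a) column.
        run_time = datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S")

        for tr in soup.find("tbody", id="tbody").find_all("tr"):
            cells = [td.get_text().strip() for td in tr.find_all("td")]
            print(" ; ".join(cells + [run_time, updated]))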

  • SQL use DISTINCT with ORDER BY (Oracle)

    - by ArneRie
    Hi, I have a strange problem. I want to select "timestamps" from a DB table with DISTINCT, ordered by timestamp.

        ID  TimeStamp
        --  ---------
        1   123456789
        2   123456789
        3   333333333
        4   334345643

    In my PHP script:

        $sql = "SELECT DISTINCT TIMESTAMP FROM TIMESTAMPS ORDER BY TIMESTAMP";

    When I use ORDER BY, the values are returned twice. Without ORDER BY the result is correct, but not sorted. We are using Oracle 10g. Any ideas?

  • MySQL join table - selecting the newest row

    - by cmancre
    Hi, I have the following two MySQL tables:

        TABLE NAMES
        NAME_ID  NAME
        1        name1
        2        name2
        3        name3

        TABLE STATUS
        STATUS_ID  NAME_ID  TIMESTAMP
        1          1        2010-12-20 12:00
        2          2        2010-12-20 10:00
        3          3        2010-12-20 10:30
        4          3        2010-12-20 14:00

    I would like to select all info from table NAMES and add the most recent corresponding TIMESTAMP column from table STATUS:

        RESULT
        NAME_ID  NAME   TIMESTAMP
        1        name1  2010-12-20 12:00
        2        name2  2010-12-20 10:00
        3        name3  2010-12-20 14:00

    I am stuck on this one. How do I left join only on the newest timestamp?
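
    Because only the newest TIMESTAMP per name is wanted, a LEFT JOIN plus GROUP BY with MAX() already yields the result above (this relies on the timestamps being stored in a sortable form, as shown). A self-contained sketch with sqlite3 standing in for MySQL - the SQL itself is plain enough to run on either:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE names  (name_id INTEGER, name TEXT);
            CREATE TABLE status (status_id INTEGER, name_id INTEGER, timestamp TEXT);
            INSERT INTO names  VALUES (1,'name1'),(2,'name2'),(3,'name3');
            INSERT INTO status VALUES (1,1,'2010-12-20 12:00'),(2,2,'2010-12-20 10:00'),
                                      (3,3,'2010-12-20 10:30'),(4,3,'2010-12-20 14:00');
        """)

        # MAX() over the joined rows keeps one row per name with its newest
        # timestamp; string MAX works because the format sorts chronologically.
        rows = db.execute("""
            SELECT n.name_id, n.name, MAX(s.timestamp) AS timestamp
            FROM names n
            LEFT JOIN status s ON s.name_id = n.name_id
            GROUP BY n.name_id, n.name
            ORDER BY n.name_id
        """).fetchall()

        for r in rows:
            print(r)  # (1, 'name1', '2010-12-20 12:00') ...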

  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie-based authentication/sessions in Google App Engine because I don't like the idea of memcache-based sessions, and I also don't like the idea of forcing users to create Google accounts just to use a website. I stumbled across someone's posting that mentioned some signed cookie functions from the Tornado framework, and it looks like what I need. What I have in mind is storing a user's id in a tamper-proof cookie, and maybe using a decorator for the request handlers to test the authentication status of the user; as a side benefit, the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET. This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? The set_secure_cookie and get_secure_cookie portions are the most important, but it would be nice to be able to use the other methods as well.

        #!/usr/bin/env python
        import Cookie
        import base64
        import time
        import hashlib
        import hmac
        import datetime
        import re
        import calendar
        import email.utils
        import logging

        def _utf8(s):
            if isinstance(s, unicode):
                return s.encode("utf-8")
            assert isinstance(s, str)
            return s

        def _unicode(s):
            if isinstance(s, str):
                try:
                    return s.decode("utf-8")
                except UnicodeDecodeError:
                    raise HTTPError(400, "Non-utf8 argument")
            assert isinstance(s, unicode)
            return s

        def _time_independent_equals(a, b):
            if len(a) != len(b):
                return False
            result = 0
            for x, y in zip(a, b):
                result |= ord(x) ^ ord(y)
            return result == 0

        def cookies(self):
            """A dictionary of Cookie.Morsel objects."""
            if not hasattr(self, "_cookies"):
                self._cookies = Cookie.BaseCookie()
                if "Cookie" in self.request.headers:
                    try:
                        self._cookies.load(self.request.headers["Cookie"])
                    except:
                        self.clear_all_cookies()
            return self._cookies

        def _cookie_signature(self, *parts):
            self.require_setting("cookie_secret", "secure cookies")
            hash = hmac.new(self.application.settings["cookie_secret"],
                            digestmod=hashlib.sha1)
            for part in parts:
                hash.update(part)
            return hash.hexdigest()

        def get_cookie(self, name, default=None):
            """Gets the value of the cookie with the given name, else default."""
            if name in self.cookies:
                return self.cookies[name].value
            return default

        def set_cookie(self, name, value, domain=None, expires=None, path="/",
                       expires_days=None):
            """Sets the given cookie name/value with the given options."""
            name = _utf8(name)
            value = _utf8(value)
            if re.search(r"[\x00-\x20]", name + value):
                # Don't let us accidentally inject bad stuff
                raise ValueError("Invalid cookie %r:%r" % (name, value))
            if not hasattr(self, "_new_cookies"):
                self._new_cookies = []
            new_cookie = Cookie.BaseCookie()
            self._new_cookies.append(new_cookie)
            new_cookie[name] = value
            if domain:
                new_cookie[name]["domain"] = domain
            if expires_days is not None and not expires:
                expires = datetime.datetime.utcnow() + datetime.timedelta(
                    days=expires_days)
            if expires:
                timestamp = calendar.timegm(expires.utctimetuple())
                new_cookie[name]["expires"] = email.utils.formatdate(
                    timestamp, localtime=False, usegmt=True)
            if path:
                new_cookie[name]["path"] = path

        def clear_cookie(self, name, path="/", domain=None):
            """Deletes the cookie with the given name."""
            expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
            self.set_cookie(name, value="", path=path, expires=expires,
                            domain=domain)

        def clear_all_cookies(self):
            """Deletes all the cookies the user sent with this request."""
            for name in self.cookies.iterkeys():
                self.clear_cookie(name)

        def set_secure_cookie(self, name, value, expires_days=30, **kwargs):
            """Signs and timestamps a cookie so it cannot be forged"""
            timestamp = str(int(time.time()))
            value = base64.b64encode(value)
            signature = self._cookie_signature(name, value, timestamp)
            value = "|".join([value, timestamp, signature])
            self.set_cookie(name, value, expires_days=expires_days, **kwargs)

        def get_secure_cookie(self, name, include_name=True, value=None):
            """Returns the given signed cookie if it validates, or None"""
            if value is None:
                value = self.get_cookie(name)
            if not value:
                return None
            parts = value.split("|")
            if len(parts) != 3:
                return None
            if include_name:
                signature = self._cookie_signature(name, parts[0], parts[1])
            else:
                signature = self._cookie_signature(parts[0], parts[1])
            if not _time_independent_equals(parts[2], signature):
                logging.warning("Invalid cookie signature %r", value)
                return None
            timestamp = int(parts[1])
            if timestamp < time.time() - 31 * 86400:
                logging.warning("Expired cookie %r", value)
                return None
            try:
                return base64.b64decode(parts[0])
            except:
                return None

    An example cookie value:

        uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14

    Parts:
    1. uid is the name of the key
    2. 1234 is your value in the clear
    3. 1234567890 is the timestamp
    4. d32b9e9c67274fa062e2599fd659cc14 is the signature made from the value and the timestamp
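
    For reference, the cookie those two methods produce and consume is just base64(value) | timestamp | hex(HMAC-SHA1(cookie_secret, name + value + timestamp)). A minimal, self-contained sketch of that sign/verify round trip (Python 3 here; the secret and uid are made up):

        import base64
        import hashlib
        import hmac
        import time

        secret = b"my-cookie-secret"  # stand-in for settings["cookie_secret"]

        def signature(*parts):
            # Same construction as _cookie_signature above: HMAC-SHA1 over all parts.
            h = hmac.new(secret, digestmod=hashlib.sha1)
            for part in parts:
                h.update(part)
            return h.hexdigest().encode()

        # --- set_secure_cookie equivalent ---
        name, value = b"uid", base64.b64encode(b"1234")
        ts = str(int(time.time())).encode()
        cookie = b"|".join([value, ts, signature(name, value, ts)])

        # --- get_secure_cookie equivalent ---
        v, t, sig = cookie.split(b"|")
        assert hmac.compare_digest(sig, signature(name, v, t))  # constant-time compare
        assert int(t) >= time.time() - 31 * 86400               # reject stale cookies
        print(base64.b64decode(v))                              # b'1234'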

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000s of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    - push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost-sorted order.
    - tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
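
    As one hedged point in the design space, not a recommendation: in a relational store, a single table with a composite (key, timestamp) index makes push an append and tail an index range scan; sorted key-value stores support the same layout by concatenating (stream key, timestamp) into the key. A sketch with sqlite3 as a stand-in (names hypothetical):

        import sqlite3

        db = sqlite3.connect("events.db")  # hypothetical file name
        db.execute("""CREATE TABLE IF NOT EXISTS events
                      (key TEXT, ts INTEGER, event BLOB)""")
        # The composite (key, ts) index is the crux: both operations become
        # cheap scans over one contiguous slice of the index.
        db.execute("CREATE INDEX IF NOT EXISTS ix_key_ts ON events (key, ts)")

        def push(key, ts, event):
            db.execute("INSERT INTO events VALUES (?, ?, ?)", (key, ts, event))
            db.commit()  # could be batched/deferred, as the question allows

        def tail(key, ts):
            return db.execute("SELECT ts, event FROM events"
                              " WHERE key = ? AND ts >= ? ORDER BY ts",
                              (key, ts)).fetchall()

        push("key-1", 100, b"started")
        push("key-1", 101, b"reading=42")
        print(tail("key-1", 101))  # [(101, b'reading=42')]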

  • MySQL JDBC date issues with database server in different timezone

    - by Somatik
    I have a database server in the "Europe/London" time zone and my web server in "Europe/Brussels". Since it is summer time now, my application server has a 2 hour difference. I created a test to reproduce my issue:

        Query q = JPA.em().createNativeQuery(
            "SELECT UNIX_TIMESTAMP(startDateTime) FROM `Event` WHERE `id` = 574");
        BigInteger unix = (BigInteger) q.getSingleResult();
        System.out.println(unix + "000 UNIX_TIMESTAMP to BigInteger");

        Query q2 = JPA.em().createNativeQuery(
            "SELECT startDateTime FROM `Event` WHERE `id` = 574");
        Timestamp o = (Timestamp) q2.getSingleResult();
        System.out.println(o.getTime() + " Timestamp");

    The startDateTime column is defined as 'datetime' (but the same issue occurs with 'timestamp'). The output I am getting is this:

        1340291591000 UNIX_TIMESTAMP to BigInteger
        1340284391000 Timestamp

    Reading Java date objects results in a shift in time zone; how do I fix this? I would expect the JDBC driver to just set the "unix time" value it gets from the server in the Date object. (A proper solution should work with any timezone combination, not only for a DB in GMT.)

  • How to add a column in an SQL query that includes a LEFT OUTER JOIN

    - by radbyx
    I have this query:

        SELECT p.ProductName, dt.MaxTimeStamp, p.Responsible
        FROM Product p
        LEFT JOIN (SELECT ProductID, MAX(TimeStamp) AS MaxTimeStamp
                   FROM StateLog
                   WHERE State = 0
                   GROUP BY ProductID, Status) dt
          ON p.ProductID = dt.ProductID
        ORDER BY p.ProductName;

    It works like it should, but now I need to SELECT "State" too. The tricky part is that I only want the latest "TimeStamp" where "State" was false, but now I also need the "State" for the latest "TimeStamp". I tried this:

        SELECT p.ProductName, dt.State, dt.MaxTimeStamp, p.Responsible
        FROM Product p
        LEFT JOIN (SELECT ProductID, MAX(TimeStamp) AS MaxTimeStamp, State
                   FROM StateLog
                   WHERE State = 0
                   GROUP BY ProductID, Status) dt
          ON p.ProductID = dt.ProductID
        ORDER BY p.ProductName;

    But it didn't work, because it gave me the "State" for the latest "TimeStamp" where State was false. I hope there are some clever heads out there that can help me. I'm guessing this is either very simple or very hard to solve.

  • How does my PHP function for converting a date to a Facebook timestamp look? New to PHP.

    - by First2Drown
    any suggestions to make it better?

        function convertToFBTimestamp($date) {
            $this_date = date('Y-m-d-H-i-s', strtotime($date));
            $cur_date = date('Y-m-d-H-i-s');
            list($this_year, $this_month, $this_day, $this_hour, $this_min, $this_sec) = explode('-', $this_date);
            list($cur_year, $cur_month, $cur_day, $cur_hour, $cur_min, $cur_sec) = explode('-', $cur_date);
            $this_unix_time = mktime($this_hour, $this_min, $this_sec, $this_month, $this_day, $this_year);
            $cur_unix_time = mktime($cur_hour, $cur_min, $cur_sec, $cur_month, $cur_day, $cur_year);
            $cur_unix_date = mktime(0, 0, 0, $cur_month, $cur_day, $cur_year);
            $dif_in_sec = $cur_unix_time - $this_unix_time;
            $dif_in_min = (int)($dif_in_sec / 60);
            $dif_in_hours = (int)($dif_in_min / 60);
            if (date('Y-m-d', strtotime($date)) == date('Y-m-d')) {
                if ($dif_in_sec < 60) {
                    return $dif_in_sec . " seconds ago";
                } elseif ($dif_in_sec < 120) {
                    return "about a minute ago";
                } elseif ($dif_in_min < 60) {
                    return $dif_in_min . " minutes ago";
                } else {
                    if ($dif_in_hours == 1) {
                        return $dif_in_hours . " hour ago";
                    } else {
                        return $dif_in_hours . " hours ago";
                    }
                }
            } elseif ($cur_unix_date - $this_unix_time < 86400) {
                return strftime("Yesterday at %l:%M%P", $this_unix_time);
            } elseif ($cur_unix_date - $this_unix_time < 259200) {
                return strftime("%A at %l:%M%P", $this_unix_time);
            } else {
                if ($this_year == $cur_year) {
                    return strftime("%B, %e at %l:%M%P", $this_unix_time);
                } else {
                    return strftime("%B, %e %Y at %l:%M%P", $this_unix_time);
                }
            }
        }

  • Converting a date string which is before 1970 into a timestamp in MySQL.

    - by Jamie
    Not a very good title, so my apologies. For some reason (I wasn't the person who did it, I digress) we have a table structure where the field type for a date is varchar (odd). We have some dates such as:

        1932-04-01 00:00:00
        1929-07-04 00:00:00

    I need to do a query which will convert these date strings into a Unix timestamp. However, in MySQL, if you convert a date which is before 1970 it will return 0. Any ideas? Thanks so much! EDIT: Wrong date format. Oops.
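
    Unix timestamps are signed, so pre-1970 dates are simply negative second counts; UNIX_TIMESTAMP() just declines to produce them. One commonly used workaround is to compute the signed difference yourself, e.g. TIMESTAMPDIFF(SECOND, '1970-01-01 00:00:00', date_column) in MySQL (hedged: this treats the stored wall-clock string as UTC, so check the timezone semantics against your data). A quick Python check of the expected value:

        from datetime import datetime, timezone

        # Treat the stored wall-clock string as UTC (an assumption; adjust if
        # the varchar column actually holds local times).
        dt = datetime.strptime("1932-04-01 00:00:00", "%Y-%m-%d %H:%M:%S")
        epoch = dt.replace(tzinfo=timezone.utc).timestamp()
        print(int(epoch))  # -1191369600, a perfectly valid (negative) timestamp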

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput.

    So flow control is in effect:

    - when the congestion window is less than or equal to the amount of bytes outstanding on the connection. We can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and
    - when the window in the TCP segment received is advertised as 0.

    We time from these events until we send new data (i.e. until args[4]->tcp_seq reaches the snxt value recorded when the window closed). Here's the script:

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
        {
                cwndclosed[args[1]->cs_cid] = timestamp;
                cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    avg(timestamp - cwndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] =
                    stddev(timestamp - cwndclosed[args[1]->cs_cid]);
                @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
                cwndclosed[args[1]->cs_cid] = 0;
                cwndsnxt[args[1]->cs_cid] = 0;
        }

        tcp:::receive
        / args[4]->tcp_window == 0 &&
          (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                swndclosed[args[1]->cs_cid] = timestamp;
                swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
                @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
        {
                @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    avg(timestamp - swndclosed[args[1]->cs_cid]);
                @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] =
                    stddev(timestamp - swndclosed[args[1]->cs_cid]);
                swndclosed[args[1]->cs_cid] = 0;
                swndsnxt[args[1]->cs_cid] = 0;
        }

        END
        {
                printf("%-6s %-20s %-8s %-25s %-8s %-8s\n",
                    "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)",
                    "StdDev", "Num");
                printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n",
                    @meantimeclosed, @stddevtimeclosed, @numclosed);
        }

    So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

        ^C
        Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev    Num
        cwnd   10.175.96.92         80       86064329                  77311705  125
        cwnd   10.175.96.92         22       122068522                 151039669 81

    So we see in this case the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086 sec for port 80 and 0.12 sec for port 22.

    So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful step may be to see if congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmission with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

  • 'Invalid column name [ColumnName]' on a nested LINQ query.

    - by Joe
    I've got the following query:

        ATable
            .GroupBy(x => new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID})
            .Select(x => new {FieldA = x.Key.FieldA, ...,
                              last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault().Timestamp})

    which results in:

        SqlException: Invalid column name 'FieldAID' x 5
        SqlException: Invalid column name 'FieldBID' x 5
        SqlException: Invalid column name 'FieldCID' x 1

    I've worked out it has to do with the final access to Timestamp, because this works:

        ATable
            .GroupBy(x => new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID})
            .Select(x => new {FieldA = x.Key.FieldA, ...,
                              last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault()})

    The query has been simplified. The purpose is to group by a set of variables and then show the last time this grouping occurred in the DB. I'm using LINQPad 4 to generate these results, so the Timestamp gives me a string whereas FirstOrDefault gives me the whole object, which isn't ideal.

    Update: On further testing I've noticed that the number and type of SqlExceptions is related to the class created in the GroupBy clause. So,

        ATable
            .GroupBy(x => new {FieldA = x.FieldAID})
            .Select(x => new {FieldA = x.Key.FieldA,
                              last_seen = x.OrderByDescending(y => y.Timestamp).FirstOrDefault()})

    results in:

        SqlException: Invalid column name 'FieldAID' x 5

  • java regex: capture multiline sequence between tokens

    - by Guillaume
    I'm struggling with a regex for splitting log files into log sequences, in order to match patterns inside these sequences. The log format is:

        timestamp fieldA fieldB fieldn log message1
        timestamp fieldA fieldB fieldn log message2
        log message2bis
        timestamp fieldA fieldB fieldn log message3

    The timestamp regex is known. I want to extract every log sequence (potentially multiline) between timestamps, and I want to keep the timestamp. At the same time I want to keep an exact count of lines. What I need is how to decorate the timestamp pattern to make it split my log file into log sequences. I cannot split the whole file as a String, since the file content is provided in a CharBuffer. Here is the sample method that will be using this log sequence matcher:

        private void matches(File f, CharBuffer cb) {
            Matcher sequenceBreak = sequencePattern.matcher(cb); // sequence matcher
            int lines = 1;
            int sequences = 0;
            while (sequenceBreak.find()) {
                sequences++;
                String sequence = sequenceBreak.group();
                if (filter.accept(sequence)) {
                    System.out.println(f + ":" + lines + ":" + sequence);
                }
                // count lines
                Matcher lineBreak = LINE_PATTERN.matcher(sequence);
                while (lineBreak.find()) {
                    lines++;
                }
                if (sequenceBreak.end() == cb.limit()) {
                    break;
                }
            }
        }
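
    The usual decoration is to anchor each match at a timestamp, let it run lazily across newlines, and stop at a lookahead for the next timestamp (or end of input). Here is the idea sketched in Python; the same pattern shape works with java.util.regex using Pattern.DOTALL and \z, and the timestamp regex below is a made-up stand-in for the known one:

        import re

        TS = r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"  # stand-in timestamp regex
        log = ("2014-01-01 10:00:00 fieldA fieldB log message2\n"
               "log message2bis\n"
               "2014-01-01 10:00:05 fieldA fieldB log message3\n")

        # (?s) lets .*? cross newlines; the lookahead stops just before the
        # next timestamp, so the timestamp stays at the head of each sequence.
        seq_pattern = re.compile(r"(?s)%s.*?(?=%s|\Z)" % (TS, TS))

        for m in seq_pattern.finditer(log):
            seq = m.group(0)
            print(repr(seq))
            print("lines:", seq.count("\n"))  # per-sequence line count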

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week.

    My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
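
    A hedged refinement of approach 2 that sidesteps the tie problem: treat the boundary timestamp as provisional - delete it from aux_table and re-copy everything >= it, so same-timestamp stragglers are picked up without creating duplicates. A self-contained demonstration with sqlite3 standing in for MySQL (table names hypothetical):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE original  (ts INTEGER, payload TEXT);
            CREATE TABLE aux_table (ts INTEGER, payload TEXT);
            INSERT INTO original  VALUES (1,'a'), (2,'b');
            INSERT INTO aux_table VALUES (1,'a'), (2,'b');
            -- a straggler arrives with the same timestamp as the boundary row
            INSERT INTO original  VALUES (2,'b2'), (3,'c');
        """)

        def refresh():
            # Re-copy the boundary timestamp instead of choosing > vs >=:
            # delete it from aux_table, then copy everything >= it again.
            (boundary,) = db.execute("SELECT MAX(ts) FROM aux_table").fetchone()
            db.execute("DELETE FROM aux_table WHERE ts = ?", (boundary,))
            db.execute("INSERT INTO aux_table SELECT * FROM original WHERE ts >= ?",
                       (boundary,))

        refresh()
        print(db.execute("SELECT * FROM aux_table ORDER BY ts").fetchall())
        # [(1, 'a'), (2, 'b'), (2, 'b2'), (3, 'c')] - no duplicates, no drops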

  • Cassandra Batch_insert example in C#.net

    - by Sandeep
    Can anyone please give me an example of how to work with Cassandra batch_insert in a C# Thrift client? If possible, please let me know where I am going wrong in the following code.

        Dictionary<string, Dictionary<string, List<Mutation>>> dictionary = new Dictionary<string, Dictionary<string, List<Mutation>>>();
        Dictionary<string, List<Mutation>> subColumns = new Dictionary<string, List<Mutation>>();
        List<Mutation> listOfMutations = new List<Mutation>();

        listOfMutations.Add(new Mutation() {
            Column_or_supercolumn = new ColumnOrSuperColumn() {
                Column = new Column() {
                    Name = utf8Encoding.GetBytes("AA"),
                    Value = utf8Encoding.GetBytes("Answer Automation"),
                    Timestamp = timeStamp
                }
            }
        });

        listOfMutations.Add(new Mutation() {
            Column_or_supercolumn = new ColumnOrSuperColumn() {
                Column = new Column() {
                    Name = utf8Encoding.GetBytes("CT"),
                    Value = utf8Encoding.GetBytes("Call Tracker"),
                    Timestamp = timeStamp
                }
            }
        });

        listOfMutations.Add(new Mutation() {
            Column_or_supercolumn = new ColumnOrSuperColumn() {
                Column = new Column() {
                    Name = utf8Encoding.GetBytes("TL"),
                    Value = utf8Encoding.GetBytes("Track That Lead"),
                    Timestamp = timeStamp
                }
            }
        });

        SuperColumn superColumn = new SuperColumn() {
            Name = utf8Encoding.GetBytes("Indatus")
        };

        subColumns.Add("Super1", listOfMutations);
        dictionary.Add("Indatus", subColumns);

        client.batch_mutate("Keyspace1", dictionary, ConsistencyLevel.ONE);

    I understand that the SuperColumn struct expects a List<Column>, but I do not have a list of Columns; rather, I have a List<Mutation>. Thanks in advance.

  • How to use a parameterized function for the Default Binding of a SQL Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table ("primary_duplicate") to record each entry as 'P' (primary) or 'D' (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see "GetPrimaryDuplicate" below). But I don't know how to bind this function, which requires three parameters, to the table's "primary_duplicate" column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check, but I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted?

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[FileCatalog](
            [id] [uniqueidentifier] NOT NULL,
            [catalog_timestamp] [datetime] NOT NULL,
            [primary_duplicate] nchar(1) NOT NULL,
            [name] nvarchar(255) NULL,
            [length] [bigint] NULL,
            [timestamp] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id]
            DEFAULT (newid()) FOR [id]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp]
            DEFAULT (getdate()) FOR [catalog_timestamp]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate]
            DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate]
        GO

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE FUNCTION [dbo].[GetPrimaryDuplicate]
        (
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        )
        RETURNS nchar(1)
        AS
        BEGIN
            DECLARE @c int
            SELECT @c = COUNT(*)
            FROM FileCatalog
            WHERE name = @name AND length = @length AND timestamp = @timestamp
              AND primary_duplicate = 'P'
            IF @c > 0
                RETURN 'D' -- Duplicate
            RETURN 'P' -- Primary
        END
        GO

  • PHP date returning wrong time

    - by gargantaun
    The following script returns the wrong time after I call date_default_timezone_set("UTC"):

        <?php
        $timestamp = time();
        echo "<p>Timestamp: $timestamp</p>";

        // This returns the correct time
        echo "<p>" . date("Y-m-d H:i:s", $timestamp) . "</p>";

        echo "<p>Now I call 'date_default_timezone_set(\"UTC\")' and echo out the same timestamp.</p>";
        echo "Set timezone = " . date_default_timezone_set("UTC");

        // This returns a time 5 hours in the past
        echo "<p>" . date("Y-m-d H:i:s", $timestamp) . "</p>";
        ?>

    The timezone on the server is BST, so the second call to date() should return a time 1 hour behind the first call. It actually returns a time 5 hours behind the first one. I should note that the server was originally set up with the EDT timezone (UTC-4). That was changed to BST (UTC+1) and the server was restarted. I can't figure out if this is a PHP problem or a problem with the server.

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are always meant to be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (Unix timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint...

        CONSTRAINT not_future CHECK (timestamp - 300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 gives 5 minutes of leeway in case of slightly desynchronised times between browser and server. The problem is that this constraint always fails when submitting the current time. I've done some testing and found the following, in the PostgreSQL client:

        SELECT now()
        -- returns the correct local time

        SELECT date_part('epoch', now())
        -- returns a unix timestamp at UTC (tested by feeding the value into the
        -- date function in PHP, correcting for its compensation to my time zone)

        SELECT date_part('epoch', now() at time zone 'UTC')
        -- returns a unix timestamp two time zone offsets west: I am at GMT+2,
        -- and I get a GMT-2 timestamp

    I've figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a Unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined behaviour here?

  • XSLT: For each node transform, if A=2 and A=1 were both found do this, else do that

    - by Larry
    Example 1:

        <time>
          <timestamp>01:00</timestamp>
          <event>arrived</event>
        </time>
        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    Example 2:

        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    The XSLT needs to do, for each node:

        IF event=arrived THEN set eventtype=atdestination
        IF event=left is found AND event=arrived is found THEN set new node type=leftdestination
        ELSE set type=left

    XSLT applied to example 1:

        <event>
          <time>01:00</time>
          <type>atdestination</type>
        </event>
        <event>
          <time>02:00</time>
          <type>leftdestination</type>
        </event>

    XSLT applied to example 2:

        <event>
          <time>02:00</time>
          <type>left</type>
        </event>

  • CodeIgniter or PHP Amazon API help

    - by faya
    Hello, I have a problem searching through the Amazon web service using PHP in my CodeIgniter app. I get an "InvalidParameter: timestamp is not in ISO-8601 format" response from the server. But I don't think the timestamp is the problem, because I have compared it with the date format produced by http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html and it seems fine. Could anyone help? Here is my code:

        $private_key = 'XXXXXXXXXXXXXXXX'; // Took out real secret key

        $method = "GET";
        $host = "ecs.amazonaws.com";
        $uri = "/onca/xml";

        $timeStamp = gmdate("Y-m-d\TH:i:s.000\Z");
        $timeStamp = str_replace(":", "%3A", $timeStamp);

        $params["AWSAccesskeyId"] = "XXXXXXXXXXXX"; // Took out real access key
        $params["ItemPage"] = $item_page;
        $params["Keywords"] = $keywords;
        $params["ResponseGroup"] = "Medium2%2525COffers";
        $params["SearchIndex"] = "Books";
        $params["Operation"] = "ItemSearch";
        $params["Service"] = "AWSECommerceService";
        $params["Timestamp"] = $timeStamp;
        $params["Version"] = "2009-03-31";

        ksort($params);

        $canonicalized_query = array();
        foreach ($params as $param => $value) {
            $param = str_replace("%7E", "~", rawurlencode($param));
            $value = str_replace("%7E", "~", rawurlencode($value));
            $canonicalized_query[] = $param . "=" . $value;
        }
        $canonicalized_query = implode("&", $canonicalized_query);

        $string_to_sign = $method."\n\r".$host."\n\r".$uri."\n\r".$canonicalized_query;

        $signature = base64_encode(hash_hmac("sha256", $string_to_sign, $private_key, True));
        $signature = str_replace("%7E", "~", rawurlencode($signature));

        $request = "http://".$host.$uri."?".$canonicalized_query."&Signature=".$signature;

        $response = @file_get_contents($request);
        if ($response === False) {
            return "response fail";
        } else {
            $parsed_xml = simplexml_load_string($response);
            if ($parsed_xml === False) {
                return "parse fail";
            } else {
                return $parsed_xml;
            }
        }

    P.S. - Personally I think that something is wrong in the generation of the $string_to_sign when hashing it.
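
    Two details in the PHP above are worth checking, offered as likely culprits rather than certainties: the string-to-sign for this API is conventionally joined with "\n" rather than "\n\r", and the Timestamp should be URL-encoded only once, by the canonicalisation loop (pre-replacing ":" with "%3A" makes it double-encoded); the access key parameter is also usually spelled AWSAccessKeyId. A Python sketch of the same signing scheme for comparison (keys are placeholders, matching the question):

        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse

        def signed_url(params, secret_key, host="ecs.amazonaws.com", uri="/onca/xml"):
            params = dict(params)
            params["Timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            # RFC 3986 encoding with '~' untouched - the same effect as the
            # str_replace("%7E", "~", rawurlencode(...)) dance in the PHP.
            quote = lambda s: urllib.parse.quote(str(s), safe="-_.~")
            query = "&".join("%s=%s" % (quote(k), quote(v))
                             for k, v in sorted(params.items()))
            to_sign = "\n".join(["GET", host, uri, query])  # "\n", not "\n\r"
            digest = hmac.new(secret_key.encode(), to_sign.encode(),
                              hashlib.sha256).digest()
            return "http://%s%s?%s&Signature=%s" % (
                host, uri, query, quote(base64.b64encode(digest).decode()))

        print(signed_url({"AWSAccessKeyId": "XXXXXXXXXXXX",
                          "Service": "AWSECommerceService",
                          "Operation": "ItemSearch",
                          "SearchIndex": "Books",
                          "Keywords": "timestamp",
                          "Version": "2009-03-31"},
                         "XXXXXXXXXXXXXXXX"))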

  • How do I add a column that displays the number of distinct rows to this query?

    - by Fake Code Monkey Rashid
    Hello good people! I don't know how to ask my question clearly, so I'll just show you the money. To start with, here's a sample table:

        CREATE TABLE sandbox (
            id integer NOT NULL,
            callsign text NOT NULL,
            this text NOT NULL,
            that text NOT NULL,
            "timestamp" timestamp with time zone DEFAULT now() NOT NULL
        );

        CREATE SEQUENCE sandbox_id_seq
            START WITH 1
            INCREMENT BY 1
            NO MINVALUE
            NO MAXVALUE
            CACHE 1;

        ALTER SEQUENCE sandbox_id_seq OWNED BY sandbox.id;
        SELECT pg_catalog.setval('sandbox_id_seq', 14, true);
        ALTER TABLE sandbox ALTER COLUMN id SET DEFAULT nextval('sandbox_id_seq'::regclass);

        INSERT INTO sandbox VALUES (1, 'alpha', 'foo', 'qux', '2010-12-29 16:51:09.897579+00');
        INSERT INTO sandbox VALUES (2, 'alpha', 'foo', 'qux', '2010-12-29 16:51:36.108867+00');
        INSERT INTO sandbox VALUES (3, 'bravo', 'bar', 'quxx', '2010-12-29 16:52:36.370507+00');
        INSERT INTO sandbox VALUES (4, 'bravo', 'foo', 'quxx', '2010-12-29 16:52:47.584663+00');
        INSERT INTO sandbox VALUES (5, 'charlie', 'foo', 'corge', '2010-12-29 16:53:00.742356+00');
        INSERT INTO sandbox VALUES (6, 'delta', 'foo', 'qux', '2010-12-29 16:53:10.884721+00');
        INSERT INTO sandbox VALUES (7, 'alpha', 'foo', 'corge', '2010-12-29 16:53:21.242904+00');
        INSERT INTO sandbox VALUES (8, 'alpha', 'bar', 'corge', '2010-12-29 16:54:33.318907+00');
        INSERT INTO sandbox VALUES (9, 'alpha', 'baz', 'quxx', '2010-12-29 16:54:38.727095+00');
        INSERT INTO sandbox VALUES (10, 'alpha', 'bar', 'qux', '2010-12-29 16:54:46.237294+00');
        INSERT INTO sandbox VALUES (11, 'alpha', 'baz', 'qux', '2010-12-29 16:54:53.891606+00');
        INSERT INTO sandbox VALUES (12, 'alpha', 'baz', 'corge', '2010-12-29 16:55:39.596076+00');
        INSERT INTO sandbox VALUES (13, 'alpha', 'baz', 'corge', '2010-12-29 16:55:44.834019+00');
        INSERT INTO sandbox VALUES (14, 'alpha', 'foo', 'qux', '2010-12-29 16:55:52.848792+00');

        ALTER TABLE ONLY sandbox ADD CONSTRAINT sandbox_pkey PRIMARY KEY (id);

    Here's the current SQL query I have:

        SELECT *
        FROM (
            SELECT DISTINCT ON (this, that) id, this, that, timestamp
            FROM sandbox
            WHERE callsign = 'alpha'
              AND CAST(timestamp AS date) = '2010-12-29'
        ) playground
        ORDER BY timestamp DESC

    This is the result it gives me:

        id   this  that   timestamp
        -----------------------------------------------
        14   foo   qux    2010-12-29 16:55:52.848792+00
        13   baz   corge  2010-12-29 16:55:44.834019+00
        11   baz   qux    2010-12-29 16:54:53.891606+00
        10   bar   qux    2010-12-29 16:54:46.237294+00
        9    baz   quxx   2010-12-29 16:54:38.727095+00
        8    bar   corge  2010-12-29 16:54:33.318907+00
        7    foo   corge  2010-12-29 16:53:21.242904+00

    This is what I want to see:

        id   this  that   timestamp                      count
        -------------------------------------------------------
        14   foo   qux    2010-12-29 16:55:52.848792+00  3
        13   baz   corge  2010-12-29 16:55:44.834019+00  2
        11   baz   qux    2010-12-29 16:54:53.891606+00  1
        10   bar   qux    2010-12-29 16:54:46.237294+00  1
        9    baz   quxx   2010-12-29 16:54:38.727095+00  1
        8    bar   corge  2010-12-29 16:54:33.318907+00  1
        7    foo   corge  2010-12-29 16:53:21.242904+00  1

    EDIT: I'm using PostgreSQL 9.0.* (if that helps any).
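
    One hedged suggestion, since PostgreSQL 9.0 supports window functions: count(*) OVER (PARTITION BY this, that) is computed before DISTINCT ON filters the rows, so it can ride along in the same select list, counting every row in each (this, that) group while DISTINCT ON keeps only the newest. Sketched here through psycopg2 (the connection string is made up; the SQL against the sandbox table above is the point):

        import psycopg2  # assumes a reachable PostgreSQL with the sandbox table

        sql = """
        SELECT * FROM (
            SELECT DISTINCT ON (this, that)
                   id, this, that, timestamp,
                   count(*) OVER (PARTITION BY this, that) AS count
            FROM sandbox
            WHERE callsign = 'alpha'
              AND CAST(timestamp AS date) = '2010-12-29'
            ORDER BY this, that, timestamp DESC
        ) playground
        ORDER BY timestamp DESC
        """

        with psycopg2.connect("dbname=sandbox") as conn:
            with conn.cursor() as cur:
                cur.execute(sql)
                for row in cur.fetchall():
                    print(row)  # (14, 'foo', 'qux', ..., 3) and so on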

  • Postgres Stored procedure using iBatis

    - by Om Yadav
    I get this error:

        --- The error occurred while applying a parameter map.
        --- Check the newSubs-InlineParameterMap.
        --- Check the statement (query failed).
        --- Cause: org.postgresql.util.PSQLException: ERROR: wrong record type supplied in RETURN NEXT
        Where: PL/pgSQL function "getnewsubs" line 34 at return next

    The function detail is as below:

        CREATE OR REPLACE FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer)
          RETURNS SETOF record AS
        $BODY$
        declare
            v_fromdt alias for $1;
            v_todt alias for $2;
            v_domno alias for $3;
            v_cursor refcursor;
            v_rec record;
            v_cpno bigint;
            v_actno int;
            v_actname varchar(50);
            v_actid varchar(100);
            v_cpntypeid varchar(100);
            v_mrp double precision;
            v_domname varchar(100);
            v_usedt timestamp without time zone;
            v_expirydt timestamp without time zone;
            v_createdt timestamp without time zone;
            v_ctno int;
            v_phone varchar;
        begin
            open v_cursor for
                select cpno, c.actno, usedt
                from cpnusage c
                inner join account s on s.actno = c.actno
                where usedt >= $1 and usedt < $2
                  and validdomstat(s.domno, v_domno)
                order by c.usedt;
            fetch v_cursor into v_cpno, v_actno, v_usedt;
            while found loop
                if isactivation(v_cpno, v_actno, v_usedt) IS TRUE then
                    select into v_actno, v_actname, v_actid, v_cpntypeid, v_mrp, v_domname,
                                v_ctno, v_cpno, v_usedt, v_expirydt, v_createdt, v_phone
                           a.actno, a.actname as name, a.actid as actid, c.descr as cpntypeid,
                           l.mrp as mrp, s.domname as domname, c.ctno as ctno, b.cpno,
                           b.usedt, b.expirydt, d.createdt, a.phone
                    from account a
                    inner join cpnusage b on a.actno = b.actno
                    inner join cpn d on b.cpno = d.cpno
                    inner join cpntype c on d.ctno = c.ctno
                    inner join ssgdom s on a.domno = s.domno
                    left join price_class l ON l.price_class_id = b.price_class_id
                    where validdomstat(a.domno, v_domno)
                      and b.cpno = v_cpno and b.actno = v_actno;
                    select into v_rec v_actno, v_actname, v_actid, v_cpntypeid, v_mrp,
                                v_domname, v_ctno, v_cpno, v_usedt, v_expirydt, v_createdt, v_phone;
                    return next v_rec;
                end if;
                fetch v_cursor into v_cpno, v_actno, v_usedt;
            end loop;
            return;
        end;
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE;
        ALTER FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer) OWNER TO radius;

    If I run the function from the console it works fine and gives me the correct response, but calling it through Java causes the above error. Can anybody help? It's very urgent, so please respond as soon as possible. Thanks in advance.
