Search Results

Search found 1480 results on 60 pages for 'jav 000'.

Page 4/60

  • WWDC: Apple unveils iOS 8 with an SDK introducing more than 4,000 new APIs; the beta version is available to developers

    WWDC: Apple unveils iOS 8 with an SDK introducing more than 4,000 new APIs; the beta version is available to developers. WWDC is in full swing. At its developer-focused event, Apple presented the next major version of its mobile operating system, iOS. iOS 8 introduces a significant number of new features and improvements intended to further win over fans of the iDevices that run the operating system. Its notification center has been reworked...

    Read the article

  • ignoring informational payload, type INVALID_COOKIE msgid=00000000

    - by user197279
    I'm configuring a site-to-site vpn between openswan ipsec and cisco asa 5540. After the step, i started ipesc service but the error i'm seeing is: ignoring informational payload, type INVALID_COOKIE msgid=00000000 Nov 5 09:42:30 pluto[11712]: "myVPN" #1: received and ignored informational message Nov 5 09:42:51 pluto[11712]: "myVPN" #1: ignoring informational payload, type INVALID_COOKIE msgid=00000000 Nov 5 09:42:51 pluto[11712]: "myVPN" #1: received and ignored informational message Nov 5 09:43:30 pluto[11712]: "myVPN" #1: max number of retransmissions (2) reached STATE_MAIN_I2 Nov 5 09:43:30 pluto[11712]: "myVPN" #1: starting keying attempt 2 of at most 3 Any advise why I'm getting this error on openswan? Also sudo ipsec whack --status gives: "myVPN": 10.0.xx.0/24===10.0.7x.x[54.209.y.yyy,+S=C]---10.0.xx.x...10.0.70.x---41.22x.4.xx<41.22x.4.xx[+S=C]===41.22y.4.yyy/32; unrouted; eroute owner: #0 000 "myVPN": myip=54.209.zz.zz; hisip=unset; 000 "myVPN": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 3 000 "myVPN": policy: PSK+ENCRYPT+TUNNEL+DONTREKEY+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,32; interface: eth0; 000 "myVPN": newest ISAKMP SA: #0; newest IPsec SA: #0; 000 "myVPN": IKE algorithms wanted: AES_CBC(7)_256-SHA1(2)_000-MODP1024(2); flags=-strict 000 "myVPN": IKE algorithms found: AES_CBC(7)_256-SHA1(2)_160-MODP1024(2) 000 "myVPN": ESP algorithms wanted: AES(12)_256-SHA1(2)_000; flags=-strict 000 "myVPN": ESP algorithms loaded: AES(12)_256-SHA1(2)_160 000 000 #5: "myVPN":500 STATE_MAIN_I2 (sent MI2, expecting MR2); EVENT_RETRANSMIT in 8s; nodpd; idle; import:admin initiate 000 #5: pending Phase 2 for "myVPN" replacing #0 Thanks.

    Read the article

  • Insert Data from to a table

    - by Lee_McIntosh
    I have a table that lists the number of comments from a particular site, like the following: Date Site Comments Total --------------------------------------------------------------- 2010-04-01 00:00:00.000 1 5 5 2010-04-01 00:00:00.000 2 8 13 2010-04-01 00:00:00.000 4 2 7 2010-04-01 00:00:00.000 7 13 13 2010-04-01 00:00:00.000 9 1 2 I have another table that lists ALL sites, for example from 1 to 10: Site ----- 1 2 ... 9 10 Using the following code I can find out which sites are missing entries for the previous month: SELECT s.site from tbl_Sites s EXCEPT SELECT c.site from tbl_Comments c WHERE c.[Date] = DATEADD(mm, DATEDIFF(mm, 0, GetDate()) -1,0) Producing: site ----- 3 5 6 8 10 I would like to be able to insert the missing sites listed by my query into the comments table with some default values, i.e. '0's: Date Site Comments Total --------------------------------------------------------------- 2010-04-01 00:00:00.000 3 0 0 2010-04-01 00:00:00.000 5 0 0 2010-04-01 00:00:00.000 6 0 0 2010-04-01 00:00:00.000 8 0 0 2010-04-01 00:00:00.000 10 0 0 The question is: how do I update/insert the table with these values? Cheers, Lee
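
    A common way to do this kind of gap-fill is a single INSERT ... SELECT driven by the set of missing sites. Purely as an illustration of the pattern, the sketch below uses Python's sqlite3 module against a simplified, hypothetical version of the two tables (the T-SQL date arithmetic is replaced by a literal month), so it is not a drop-in answer for SQL Server.

        # Sketch: insert a zero row for every site that has no comment row for the month.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE tbl_Sites(Site INTEGER PRIMARY KEY);
        CREATE TABLE tbl_Comments(Date TEXT, Site INTEGER, Comments INTEGER, Total INTEGER);
        INSERT INTO tbl_Sites(Site) VALUES (1),(2),(3),(4),(5);
        INSERT INTO tbl_Comments VALUES ('2010-04-01', 1, 5, 5), ('2010-04-01', 2, 8, 13);
        """)

        month = '2010-04-01'
        con.execute("""
        INSERT INTO tbl_Comments(Date, Site, Comments, Total)
        SELECT ?, s.Site, 0, 0
        FROM tbl_Sites s
        WHERE s.Site NOT IN (SELECT c.Site FROM tbl_Comments c WHERE c.Date = ?)
        """, (month, month))

        print(con.execute("SELECT * FROM tbl_Comments ORDER BY Site").fetchall())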

    Read the article

  • CTE Join query issues

    - by Lee_McIntosh
    Hi everyone, this problem has me head going round in circles at the moment and i wondering if anyone could give any pointers as to where im going wrong. Im trying to produce a SPROC that produces a dataset to be called by SSRS for graphs spanning the last 6 months. The data for example purposes uses three tables (theres more but the it wont change the issue at hand) and are as follows: tbl_ReportList: Report Site ---------------- North abc North def East bbb East ccc East ddd South poa South pob South poc South pod West xyz tbl_TicketsRaisedThisMonth: Date Site Type NoOfTickets --------------------------------------------------------- 2010-07-01 00:00:00.000 abc Support 101 2010-07-01 00:00:00.000 abc Complaint 21 2010-07-01 00:00:00.000 def Support 6 ... 2010-12-01 00:00:00.000 abc Support 93 2010-12-01 00:00:00.000 xyz Support 5 tbl_FeedBackRequests: Date Site NoOfFeedBackR ---------------------------------------------------------------- 2010-07-01 00:00:00.000 abc 101 2010-07-01 00:00:00.000 def 11 ... 2010-12-01 00:00:00.000 abc 63 2010-12-01 00:00:00.000 xyz 4 I'm using CTE's to simplify the code, which is as follows: DECLARE @ReportName VarChar(200) SET @ReportName = 'North'; WITH TicketsRaisedThisMonth AS ( SELECT [Date], Site, SUM(NoOfTickets) AS NoOfTickets FROM tbl_TicketsRaisedThisMonth WHERE [Date] >= DATEADD(mm, DATEDIFF(m,0,GETDATE())-6,0) GROUP BY [Date], Site ), FeedBackRequests AS ( SELECT [Date], Site, SUM(NoOfFeedBackR) AS NoOfFeedBackR FROM tbl_FeedBackRequests WHERE [Date] >= DATEADD(mm, DATEDIFF(m,0,GETDATE())-6,0) GROUP BY [Date], Site ), SELECT trtm.[Date] SUM(trtm.NoOfTickets) AS NoOfTickets, SUM(fbr.NoOfFeedBackR) AS NoOfFeedBackR, FROM Reports rpts LEFT OUTER JOIN TotalIncidentsDuringMonth trtm ON rpts.Site = trtm.Site LEFT OUTER JOIN LoggedComplaints fbr ON rpts.Site = fbr.Site WHERE rpts.report = @ReportName GROUP BY trtm.[Date] And the output when the sproc is pass a parameter such as 'North' to be as follows: Date NoOfTickets NoOfFeedBackR ----------------------------------------------------------------------------------- 2010-07-01 00:00:00.000 128 112 2010-08-01 00:00:00.000 <data for that month> <data for that month> 2010-09-01 00:00:00.000 <data for that month> <data for that month> 2010-10-01 00:00:00.000 <data for that month> <data for that month> 2010-11-01 00:00:00.000 <data for that month> <data for that month> 2010-12-01 00:00:00.000 122 63 The issue I'm having is that when i execute the query I'm given a repeated list of values of each month, such as 128 will repeat 6 times then another value for the next months value repeated 6 times, etc. argh!

    Read the article

  • How can I convert floating point values in text to binary using Perl?

    - by YoDar
    I have a text file that looks like this: float a[10] = { 7.100000e+000 , 9.100000e+000 , 2.100000e+000 , 1.100000e+000 , 8.200000e+000 , 7.220000e+000 , 7.220000e+000 , 7.222000e+000 , 1.120000e+000 , 1.987600e+000 }; unsigned int col_ind[10] = { 1 , 4 , 3 , 4 , 5 , 2 , 3 , 4 , 1 , 5 }; Now, I want to convert each array (float / unsigned int) to a different binary file, big-endian: one binary file for all the float values and another for all the integer values. What is the simplest way to do this in Perl, considering I have over 2 million elements in each array?
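
    The conversion itself is just packing each list with a big-endian format. As an illustration of that step only, here it is using Python's struct module (the output file names and the already-parsed lists are assumptions); in Perl, the pack builtin - for example the N template for big-endian 32-bit unsigned integers - plays the same role, and for millions of elements the data should be packed and written in chunks rather than held in one string.

        # Pack floats and unsigned ints as big-endian 32-bit values.
        import struct

        floats = [7.1, 9.1, 2.1, 1.1, 8.2, 7.22, 7.22, 7.222, 1.12, 1.9876]
        uints = [1, 4, 3, 4, 5, 2, 3, 4, 1, 5]

        with open("floats.bin", "wb") as f:
            f.write(struct.pack(f">{len(floats)}f", *floats))   # ">f" = big-endian float32

        with open("uints.bin", "wb") as f:
            f.write(struct.pack(f">{len(uints)}I", *uints))     # ">I" = big-endian uint32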

    Read the article

  • How to output a list of floats from text to a binary file in Perl?

    - by YoDar
    Hi, I have a text file that looks like this: float a[10] = { 7.100000e+000 , 9.100000e+000 , 2.100000e+000 , 1.100000e+000 , 8.200000e+000 , 7.220000e+000 , 7.220000e+000 , 7.222000e+000 , 1.120000e+000 , 1.987600e+000 }; unsigned int col_ind[10] = { 1 , 4 , 3 , 4 , 5 , 2 , 3 , 4 , 1 , 5 }; Now, I want to convert each array (float / unsigned int) to a different binary file, big-endian: one binary file for all the float values and another for all the integer values. What is the simplest way to do this in Perl, considering I have over 2 million elements in each array? Thanks, Yodar.

    Read the article

  • Tsql to find the start and end date(set based)

    - by priyanka.sarkar_2
    I have the following data: Name Date A 2011-01-01 01:00:00.000 A 2011-02-01 02:00:00.000 A 2011-03-01 03:00:00.000 B 2011-04-01 04:00:00.000 A 2011-05-01 07:00:00.000 The desired output is: Name StartDate EndDate ------------------------------------------------------------------- A 2011-01-01 01:00:00.000 2011-04-01 04:00:00.000 B 2011-04-01 04:00:00.000 2011-05-01 07:00:00.000 A 2011-05-01 07:00:00.000 NULL How do I achieve this using T-SQL with a set-based approach? The DDL is as follows: DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME) INSERT INTO @t VALUES('A', '2011-01-01 01:00:00') INSERT INTO @t VALUES('A', '2011-01-02 02:00:00') INSERT INTO @t VALUES('A', '2011-01-03 03:00:00') INSERT INTO @t VALUES('B', '2011-01-04 04:00:00') INSERT INTO @t VALUES('A', '2011-01-05 07:00:00') Select * from @t
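
    The desired output amounts to: collapse consecutive rows per Name into islands, and use the first Date of the next island as the current island's EndDate. The question itself asks for a set-based T-SQL answer (typically built with ROW_NUMBER or a LEAD-style construct); the short Python sketch below only illustrates that grouping logic on the sample data.

        # Group consecutive rows by Name, then take the next group's start as the end date.
        from itertools import groupby

        rows = [
            ("A", "2011-01-01 01:00:00"),
            ("A", "2011-01-02 02:00:00"),
            ("A", "2011-01-03 03:00:00"),
            ("B", "2011-01-04 04:00:00"),
            ("A", "2011-01-05 07:00:00"),
        ]

        groups = [(name, [d for _, d in grp]) for name, grp in groupby(rows, key=lambda r: r[0])]
        for i, (name, dates) in enumerate(groups):
            end = groups[i + 1][1][0] if i + 1 < len(groups) else None
            print(name, dates[0], end)
        # A 2011-01-01 01:00:00 2011-01-04 04:00:00
        # B 2011-01-04 04:00:00 2011-01-05 07:00:00
        # A 2011-01-05 07:00:00 None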

    Read the article

  • Windows Phone: 23% of smartphone owners on the OS come from Android; its gallery passes the 160,000-application mark

    Windows Phone: 23% of smartphone owners on the OS come from Android, and its gallery passes the 160,000-application mark. Windows Phone is doing well. That is the message Microsoft wanted to get across during a session for mobile application developers at the Build conference. The company is reassuring and pleased: Windows Phone is now the third OS worldwide, ahead of BlackBerry, which reportedly lost that spot last May. A position the company is convinced it can hold. "I don't think they (editor's note: BlackBerry) can put on the table some of the things we have done to be available across several segments," said Larry...

    Read the article

  • ActiveMQ - "Cannot send, channel has already failed" every 2 seconds?

    - by quanta
    ActiveMQ 5.7.0 In the activemq.log, I'm seeing this exception every 2 seconds: 2013-11-05 13:00:52,374 | DEBUG | Transport Connection to: tcp://127.0.0.1:37501 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Due to this keyword InactivityIOException, the first thing comes to my mind is InactivityMonitor, but the strange thing is MaxInactivityDuration=30000: 2013-11-05 13:11:02,672 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-2 Moreover, I also didn't see something like this: No message received since last read check for ... or: Channel was inactive for too (30000) long Do a netstat, I see these connections in TIME_WAIT state: tcp 0 0 127.0.0.1:38545 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38544 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38522 127.0.0.1:61616 TIME_WAIT - Here're the output when running tcpdump: Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) 0000 00.. = Differentiated Services Codepoint: Default (0x00) .... ..00 = Explicit Congestion Notification: Not-ECT (Not ECN-Capable Transport) (0x00) Total Length: 296 Identification: 0x7b6a (31594) Flags: 0x02 (Don't Fragment) 0... .... = Reserved bit: Not set .1.. .... = Don't fragment: Set ..0. .... 
= More fragments: Not set Fragment offset: 0 Time to live: 64 Protocol: TCP (6) Header checksum: 0xc063 [correct] [Good: True] [Bad: False] Source: 127.0.0.1 (127.0.0.1) Destination: 127.0.0.1 (127.0.0.1) Transmission Control Protocol, Src Port: 61616 (61616), Dst Port: 54669 (54669), Seq: 1, Ack: 2, Len: 244 Source port: 61616 (61616) Destination port: 54669 (54669) [Stream index: 11] Sequence number: 1 (relative sequence number) [Next sequence number: 245 (relative sequence number)] Acknowledgement number: 2 (relative ack number) Header length: 32 bytes Flags: 0x018 (PSH, ACK) 000. .... .... = Reserved: Not set ...0 .... .... = Nonce: Not set .... 0... .... = Congestion Window Reduced (CWR): Not set .... .0.. .... = ECN-Echo: Not set .... ..0. .... = Urgent: Not set .... ...1 .... = Acknowledgement: Set .... .... 1... = Push: Set .... .... .0.. = Reset: Not set .... .... ..0. = Syn: Not set .... .... ...0 = Fin: Not set Window size value: 256 [Calculated window size: 32768] [Window size scaling factor: 128] Checksum: 0xff1c [validation disabled] [Good Checksum: False] [Bad Checksum: False] Options: (12 bytes) No-Operation (NOP) No-Operation (NOP) Timestamps: TSval 2304161892, TSecr 2304161891 Kind: Timestamp (8) Length: 10 Timestamp value: 2304161892 Timestamp echo reply: 2304161891 [SEQ/ACK analysis] [Bytes in flight: 244] Constrained Application Protocol, TID: 240, Length: 244 00.. .... = Version: 0 ..00 .... = Type: Confirmable (0) .... 0000 = Option Count: 0 Code: Unknown (0) Transaction ID: 240 Payload Content-Type: text/plain (default), Length: 240, offset: 4 Line-based text data: text/plain [truncated] \001ActiveMQ\000\000\000\t\001\000\000\000<DE>\000\000\000\t\000\fMaxFrameSize\006\177<FF><FF><FF><FF> <FF><FF><FF>\000\tCacheSize\005\000\000\004\000\000\fCacheEnabled\001\001\000\022SizePrefixDisabled\001\000\000 MaxInactivityDurationInitalDelay\006\ It is very likely a tcp port check. 
This is what I see when trying telnet from another host: 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: java.io.EOFException | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///10.8.20.9:46775@61616 java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:275) at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:229) at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:221) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Unregistering MBean org.apache.activemq:BrokerName=localhost,Type=Connection,ConnectorName=ope nwire,ViewType=address,Name=tcp_//10.8.20.9_46775 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:/ //10.8.20.9:46775@61616 2013-11-05 16:12:41,073 | DEBUG | Stopping connection: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Stopping transport tcp:///10.8.20.9:46775@61616 | org.apache.activemq.transport.tcp.TcpTranspo rt | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.Threa dPoolExecutor@23cc2a28 | org.apache.activemq.thread.TaskRunnerFactory | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Closed socket Socket[addr=/10.8.20.9,port=46775,localport=61616] | org.apache.activemq.transpo rt.tcp.TcpTransport | ActiveMQ Task-1 2013-11-05 16:12:41,074 | DEBUG | Forcing shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor@23cc2a28 | org.apache.activemq.util.ThreadPoolUtils | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Stopped transport: tcp://10.8.20.9:46775 | 
org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Connection Stopped: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,902 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-5 So the question is: how can I find out the process that is trying to connect to my ActiveMQ (from localhost) every 2 seconds?

    Read the article

  • Ideal directory structure for web application

    - by rno
    I'm about to create a user-based website and will have to store photos, docs and other data for each user. If I take a silly number like 1 000 000 000 users, I believe that one folder with 1 000 000 000 entries won't be the fastest thing in the world! So I was thinking of creating something like 1st level : [a-z] 2nd level : [a-z] 3rd level : [a-z] Therefore bobby will be in /b/o/b/by But this also means that it won't be spread equally, because there will be very few users starting with a z and many more with an m, s, l ... so I was thinking of using a user id such as "000000000001", "000000000001" etc... 1st level : [000-999] 2nd level : [000-999] 3rd level : [000-999] therefore the data of user 000000000001 will be stored in /data/000/000/000/001 and I will be sure to have a maximum of 1000 folders at each level. What do you guys think about it? What should I do or not do? The server will be running CentOS 5.4 with ext3 on RAID 1; if the I/O gets too bad I will probably go for RAID 10.
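
    For the id-based variant, the shard path can be derived directly from the zero-padded id, which also guarantees at most 1000 entries per directory level. A minimal sketch of that mapping (the base directory and widths are simply the ones from the example above):

        # Zero-pad the numeric user id and split it into fixed-width directory levels.
        import os

        def shard_path(user_id, base="/data", width=12, level=3):
            s = str(user_id).zfill(width)                       # 1 -> "000000000001"
            parts = [s[i:i + level] for i in range(0, width, level)]
            return os.path.join(base, *parts)                   # -> /data/000/000/000/001

        print(shard_path(1))             # /data/000/000/000/001
        print(shard_path(123456789012))  # /data/123/456/789/012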

    Read the article

  • Eclipse Crashes on Ubuntu 11.10

    - by Adrian Matteo
    I'm using Eclipse Indigo with aptana, to develope a rails application and it was working fine, but now it keeps crashing on startup. It opens and when the loading bars appear on the status bar, it goes gray (not responding) and the in closes without an error. Here is the output from the terminal when I ran it from there: (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", 2012-05-27 16:05:58.272::INFO: Logging to STDERR via org.mortbay.log.StdErrLog 2012-05-27 16:06:00.586::INFO: jetty-6.1.11 2012-05-27 16:06:00.743::INFO: Started [email protected]:8500 2012-05-27 16:06:00.744::INFO: Started [email protected]:8600 2012-05-27 16:06:01.999::INFO: jetty-6.1.11 2012-05-27 16:06:01.029::INFO: Opened /tmp/jetty_preview_server.log 2012-05-27 16:06:01.046::INFO: Started [email protected]:8000 2012-05-27 16:06:01.071::INFO: jetty-6.1.11 2012-05-27 16:06:01.016::INFO: Started [email protected]:8300 ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded No bp log location saved, using default. [000:000] Browser XEmbed support present: 1 [000:000] Browser toolkit is Gtk2. [000:001] Using Gtk2 toolkit ERROR: Invalid browser function table. Some functionality may be restricted. [000:056] Warning(optionsfile.cc:47): Load: Could not open file, err=2 [000:056] No bp log location saved, using default. [000:056] Browser XEmbed support present: 1 [000:056] Browser toolkit is Gtk2. [000:056] Using Gtk2 toolkit ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded java version "1.6.0_23" OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.2) OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) java.io.FileNotFoundException: /home/amatteo/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/portal.1.2.7.024747/aptana/favicon.ico (No such file or directory) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.<init>(FileInputStream.java:120) at com.aptana.ide.server.jetty.ResourceBaseServlet.doGet(ResourceBaseServlet.java:136) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:829) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) 2012-05-27 16:06:03.277::WARN: /favicon.ico: java.io.IOException: /home/amatteo/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/portal.1.2.7.024747/aptana/favicon.ico (No such file or directory) It was working perfectly till a few days ago!

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study that indicates that certain raw data attributes that the S.M.A.R.T status of hard drives reports can have a strong correlation with the future failure of the drive. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in re- allocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabil- ities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. Seagate seems like it is trying to obscure this information about their drives by claiming that only their software can accurately determine the accurate status of their drive and by the way their software will not tell you the raw data values for the S.M.A.R.T attributes. Western digital has made no such claim to my knowledge but their status reporting tool does not appear to report raw data values either. I've been using HDtune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed... I am comparing apples to oranges when it comes to certain attributes. I've found for example that most Seagate drives will report that they have many millions of read errors while western digital 99% of the time shows 0 for read errors. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0. Now for my question. How do I normalize this data? Is Seagate producing millions of errors while Western digital is producing none? Wikipedia's article on S.M.A.R.T status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the Read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digitals reported "Read Error" count. This means that Western Digital only reports read errors that it cannot correct while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the Read error count and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information. 
Here is the smart status of my western digital drive just so you can see what I'm talking about: james@ubuntu:~$ sudo smartctl -a /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: WDC WD1001FALS-00E3A0 Serial Number: WD-WCATR0258512 Firmware Version: 05.01D05 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Jun 10 19:52:28 2010 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 179 175 021 Pre-fail Always - 4033 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 270 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1468 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 262 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 223 194 Temperature_Celsius 0x0022 105 102 000 Old_age Always - 42 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
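
    To make the hypothesis concrete, the proposed normalization is a single subtraction; the sample values below are of the kind a Seagate drive reports elsewhere on this page and are used only for illustration.

        # Subtract the read error count from the ECC-recovered count (the post's hypothesis);
        # a remainder near 0 would line up with the 0 read errors that WD reports.
        raw_read_error_rate = 162843537      # Seagate Raw_Read_Error_Rate (raw value)
        hardware_ecc_recovered = 162843537   # Seagate Hardware_ECC_Recovered (raw value)

        remainder = hardware_ecc_recovered - raw_read_error_rate
        print(remainder)                     # 0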

    Read the article

  • smartctl -t long isn't finishing

    - by xenoterracide
    I been running smartctl -t long on a drive for about 2 days now and it seems to be stalled at 10%. short and conveyance both passed. I have to send 1 of 2 drives purchased back I found badblocks with badblocks (none on this drive and I'ts made over a pass already). I'm just wondering if I should be concerned about this. smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Device Model: WDC WD10EARS-00Y5B1 Serial Number: WD-WMAV51582123 Firmware Version: 80.00A80 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon May 10 22:19:52 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 241) Self-test routine in progress... 10% of test remaining. Total time to complete Offline data collection: (20100) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 231) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 2 3 Spin_Up_Time 0x0027 131 131 021 Pre-fail Always - 6408 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 12 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 148 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 10 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 7 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 174 194 Temperature_Celsius 0x0022 106 102 000 Old_age Always - 41 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 99 - # 2 Extended offline Interrupted (host reset) 10% 30 - # 3 Short offline Completed without error 00% 0 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.

    Read the article

  • MDT 2010 Litetouch.vbs Fails to Launch

    - by Mitch
    I have the custom image captured. Import the image and files. Prepare the customsettings.ini and the boot.ini to minimize the questions the deployment team will need to answer. Everything works like a charm on virtual machines but when I map to the scripts folder on the deployment share and double-click litetouch.vbs it creates the c:\minint folder, subfolders, and a couple of log files then nothing. Here's what the log files look like: <![LOG[Property LogPath is now = C:\MININT\SMSOSD\OSDLOGS]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[Property CleanStart is now = ]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[Microsoft Deployment Toolkit version: 5.1.1642.01]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[Property Debug is now = FALSE]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[GetAllFixedDrives(False)]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> Anyone encounter this before or know what might be happening/not happening and can direct me in the right way? I've only found a couple of other references to this anywhere and they had no solution/cause listed either. I'm stumped.

    Read the article

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file in /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost: [mpenning@localhost ~]$ tftp localhost tftp> get fw.test.conf tftp> quit [mpenning@localhost ~]$ ls fw.test.conf [mpenning@localhost ~]$ However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4). [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5 Running as user "root" and group "root". This could be dangerous. Capturing on eth1 0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 ^C5 packets captured [mpenning@localhost ~]$ I have the following xinetd configuration: [root@localhost mpenning]# cat /etc/xinetd.d/tftp # default: off # description: The tftp server serves files using the trivial file transfer \ # protocol. The tftp protocol is often used to boot diskless \ # workstations, download configuration files to network-aware printers, \ # and to start the installation process for some operating systems. service tftp { socket_type = dgram protocol = udp wait = yes user = root server = /usr/sbin/in.tftpd server_args = -s /opt/tftpboot disable = no per_source = 11 cps = 100 2 flags = IPv4 } [root@localhost mpenning]#

    Read the article

  • How do I classify using SVM Classifier in Matlab?

    - by Gomathi
    I'm on a project of liver tumor segmentation and classification. I used Region Growing and FCM for liver and tumor segmentation respectively. Then, I used Gray Level Co-occurence matrix for texture feature extraction. I have to use Support Vector Machine for Classification. But I don't know how to normalize the feature vectors. Can anyone tell how to program it in Matlab? To the GLCM program, I gave the tumor segmented image as input. Was I correct? If so, I think, then, my output will also be correct. I gave the parameters exactly as in the example provided in the documentation itself. The output I obtained was stats = autoc: [1.857855266614132e+000 1.857955341199538e+000] contr: [5.103143332457753e-002 5.030548650257343e-002] corrm: [9.512661919561399e-001 9.519459060378332e-001] corrp: [9.512661919561385e-001 9.519459060378338e-001] cprom: [7.885631654779597e+001 7.905268525471267e+001] cshad: [1.219440700252286e+001 1.220659371449108e+001] dissi: [2.037387269065756e-002 1.935418927908687e-002] energ: [8.987753042491253e-001 8.988459843719526e-001] entro: [2.759187341212805e-001 2.743152140681436e-001] homom: [9.930016927881388e-001 9.935307908219834e-001] homop: [9.925660617240367e-001 9.930960070222014e-001] maxpr: [9.474275457490587e-001 9.474466930429607e-001] sosvh: [1.847174384255155e+000 1.846913030238459e+000] savgh: [2.332207337361002e+000 2.332108469591401e+000] svarh: [6.311174784234007e+000 6.314794324825067e+000] senth: [2.663144677055123e-001 2.653725436772341e-001] dvarh: [5.103143332457753e-002 5.030548650257344e-002] denth: [7.573115918713391e-002 7.073380266499811e-002] inf1h: [-8.199645492654247e-001 -8.265514568489666e-001] inf2h: [5.643539051044213e-001 5.661543271625117e-001] indnc: [9.980238521073823e-001 9.981394883569174e-001] idmnc: [9.993275086521848e-001 9.993404634013308e-001] The thing is, I run the program for three images. But all three gave me the same output. When I used graycoprops() stat = Contrast: 4.721877658740964e+005 Correlation: -3.282870417955449e-003 Energy: 8.647689474127760e-006 Homogeneity: 8.194621855726478e-003 stat = Contrast: 2.817160447307697e+004 Correlation: 2.113032196952781e-005 Energy: 4.124904827799189e-004 Homogeneity: 2.513567163994905e-002 stat = Contrast: 7.086638436309059e+004 Correlation: 2.459637878221028e-002 Energy: 4.640677159445994e-004 Homogeneity: 1.158305728309460e-002 The images are:
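
    On the normalization question specifically, a common approach before SVM training is to rescale each feature independently (min-max to [0, 1] or z-scoring) using statistics computed on the training set, and to apply the same scaling to new samples. The sketch below shows per-feature min-max scaling in plain Python as an illustration only - it is not MATLAB code and the sample vectors are invented; the same arithmetic can be written in MATLAB with min and max over the feature matrix.

        # Per-feature min-max scaling of a list of feature vectors.
        def minmax_scale(rows):
            cols = list(zip(*rows))
            lo = [min(c) for c in cols]
            hi = [max(c) for c in cols]
            return [[(v - l) / (h - l) if h != l else 0.0
                     for v, l, h in zip(row, lo, hi)]
                    for row in rows]

        features = [
            [1.86, 0.051, 78.9],   # e.g. autocorrelation, contrast, cluster prominence
            [1.20, 0.034, 45.1],
            [2.40, 0.095, 91.0],
        ]
        print(minmax_scale(features))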

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a Database Log Table, like so: STATUS_ID (PK) REQUEST_ID STATUS STATUS_DATE -------------- ------------- ---------------- ----------------------- 1 1 USER_SUBMIT 2012-09-01 00:00:00.000 2 1 APPROVED_LEVEL2 2012-09-01 01:00:00.000 3 1 APPROVED_LEVEL3 2012-09-01 02:00:00.000 4 2 USER_SUBMIT 2012-09-01 02:30:00.000 5 2 APPROVED_LEVEL2 2012-09-01 02:45:00.000 My question is, which is a better design: Record all three statuses for every request ...or... Record only the statuses needed according to the Security Level of the user submitting the request In Case 2, the data might look like this for two requests - one submitted by Security Level 2 User and another submitted by Security Level 3 user: STATUS_ID (PK) REQUEST_ID STATUS STATUS_DATE -------------- ------------- ---------------- ----------------------- 1 3 APPROVED_LEVEL2 2012-09-01 01:00:00.000 2 3 APPROVED_LEVEL3 2012-09-01 02:00:00.000 3 4 APPROVED_LEVEL3 2012-09-01 02:00:00.000

    Read the article

  • R- delete rows in multiple columns by unique number

    - by Vincent Moriarty
    Given data like this: C1<-c(3,-999.000,4,4,5) C2<-c(3,7,3,4,5) C3<-c(5,4,3,6,-999.000) DF<-data.frame(ID=c("A","B","C","D","E"),C1=C1,C2=C2,C3=C3) How do I go about removing the -999.000 data in all of the columns? I know this works per column: DF2<-DF[!(DF$C1==-999.000 | DF$C2==-999.000 | DF$C3==-999.000),] But I'd like to avoid referencing each column. I am thinking there is an easy way to reference all of the columns in a particular data frame, aka: DF3<-DF[!(DF[,]==-999.000),] or DF3<-DF[!(DF[,(2:4)]==-999.000),] but obviously these do not work. And out of curiosity, bonus points if you can tell me why I need that last comma before the ending square bracket, as in: ==-999.000),]
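
    The row-wise test being asked for ("drop any row where any value column equals -999") is easiest to see written out once. The pandas sketch below is only an illustration of that pattern, not an answer in base R; in R the analogous row-wise test can be built with something like rowSums or apply over DF[, 2:4] == -999.

        # Keep only the rows where no value column equals -999.
        import pandas as pd

        df = pd.DataFrame({
            "ID": ["A", "B", "C", "D", "E"],
            "C1": [3, -999.0, 4, 4, 5],
            "C2": [3, 7, 3, 4, 5],
            "C3": [5, 4, 3, 6, -999.0],
        })

        mask = (df.drop(columns="ID") == -999.0).any(axis=1)
        print(df[~mask])   # keeps rows A, C, D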

    Read the article

  • laptop crashed: why?

    - by sds
    my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why. # last sds pts/4 :0 Tue Sep 4 10:01 still logged in sds pts/3 :0 Tue Sep 4 10:00 still logged in reboot system boot 3.2.0-29-generic Tue Sep 4 09:43 - 11:23 (01:40) sds pts/8 :0 Mon Sep 3 14:23 - crash (19:19) this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log: auth.log: Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) no messages file syslog: Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. kern.log: Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu I had a computation running until 9:24, but the system crashed 18 minutes later! kern.log has many pages of these: Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G does this mean that my RAM is bad?! 
it also says Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400 Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984 ... Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343 Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Momentus 7200.4 Device Model: ST9500420AS Serial Number: 5VJE81YK LU WWN Device Id: 5 000c50 0440defe3 Firmware Version: 0003LVM1 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Sep 10 16:40:04 2012 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 0) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 109) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 117 099 034 Pre-fail Always - 162843537 3 Spin_Up_Time 0x0003 100 100 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 571 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 17210154023 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 174362787320258 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 571 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 1 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 061 043 045 Old_age Always In_the_past 39 (0 11 44 26) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 84 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 20 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2434 194 Temperature_Celsius 0x0022 039 057 000 Old_age Always - 39 (0 15 0 0) 195 Hardware_ECC_Recovered 0x001a 041 041 000 Old_age Always - 162843537 196 Reallocated_Event_Count 0x000f 095 095 030 Pre-fail Always - 4540 (61955, 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 4545 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Googling for the messages proved inconclusive, I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?

    Read the article

  • Why does ElapsedTicks X 10 000 not equal ElapsedMilliseconds for .Net's Stopwatch?

    - by uriDium
    I am trying to performance test some code. I am using a stopwatch. When I output the number of milliseconds it always tells me 0, so I thought I would try the number of ticks. I am seeing that the number of ticks is about 20 000 to 30 000. Looking at MSDN for TimeSpan.TicksPerMillisecond, it says that is 10 000 ticks per millisecond. In that case, why are the elapsed milliseconds on my stopwatch not appearing as 2 or 3? What am I missing? I have even output the result on the same line. This is what I get: Time taken: 26856 ticks, 0 ms And it is constant.

    Read the article

  • Jquery Slidetoggle open 1 div and close another

    - by Stephen
    I'm trying to close one div when clicking on another div . Currently, it opens multiple divs at one time. JQUERY: $(document).ready(function() { $(".dropdown dt a").click(function() { var dropID = $(this).closest("dl").attr("id"); $("#"+dropID+" dd ul").slideToggle(200); return false; }); $(".dropdown dd ul li a").click(function() { var dropID = $(this).closest("dl").attr("id"); var text = $(this).html(); var selVal = $(this).find(".dropdown_value").html(); $("#"+dropID+" dt a").html(text); $("#"+dropID+" dd ul").hide(); return false; }); $("dl[class!=dropdown]").click(function() { $(".dropdown dd ul").hide(); return false; }); $("id!=quotetoolContainer").click(function() { $(".dropdown dd ul").hide(); return false; }); $('body').click(function() { $(".dropdown dd ul").hide(); return false; }); $('.productSelection').children().hover(function() { $(this).siblings().stop().fadeTo(200,0.5); }, function() { $(this).siblings().stop().fadeTo(200,1); }); }); HTML: <div id="quotetoolContainer"> <div class="top"></div> <div id="quotetool"> <h2>Instant Price Calculator</h2> <p>Document Type</p> <dl id="docType" class="dropdown"> <dt><a href="#"><span>Select a Document Type</span></a></dt> <dd> <ul> <li><a href="#" id="1">Datasheets<span class="value">Datasheets</span></a></li> <li><a href="#">Manuals<span class="value">Manuals</span></a></li> <li><a href="#">Brochures<span class="value">Brochures</span></a></li> <li><a href="#">Newsletters<span class="value">Newsletters</span></a></li> <li><a href="#">Booklets<span class="value">Booklets</span></a></li> </ul> </dd> </dl> <p>Flat Size</p> <dl id="flatSize" class="dropdown"> <dt><a href="#">8.5" x 11"<span class="value">8.5" x 11"</span></a></dt> <dd> <ul> <li><a href="#">8.5" x 11"<span class="value">8.5" x 11"</span></a></li> <li><a href="#">11" x 17"<span class="value">11" x 17"</span></a></li> </ul> </dd> </dl> <p>Full Color or Black &amp; White?</p> <dl id="color" class="dropdown"> <dt><a href="#">Full Color<span class="value">Full Color</span></a></dt> <dd> <ul> <li><a href="#">Full Color<span class="value">Full Color</span></a></li> <li><a href="#">Black &amp; White<span class="value">Black &amp; White</span></a></li> </ul> </dd> </dl> <p>Paper</p> <dl id="paper" class="dropdown"> <dt><a href="#">Value White Paper (20 lb.)<span class="value">Value White Paper (20 lb.)</span></a></dt> <dd> <ul> <li><a href="#">Value White Paper (20 lb.)<span class="value">Value White Paper (20 lb.)</span></a></li> <li><a href="#">Premium White Paper (28 lb.)<span class="value">Premium White Paper (28 lb.)</span></a></li> <li><a href="#">Glossy White Text (80 lb.) - Recycled<span class="value">Glossy White Text (80 lb.) - Recycled</span></a></li> <li><a href="#">Glossy White Cover (80 lb.) - Recycled<span class="value">Glossy White Cover (80 lb.) 
- Recycled</span></a></li> </ul> </dd> </dl> <p>Folding</p> <dl id="folding" class="dropdown"> <dt><a href="#">Fold in Half<span class="value">Fold in Half</span></a></dt> <dd> <ul> <li><a href="#">Fold in Half<span class="value">Fold in Half</span></a></li> <li><a href="#">Tri-Fold<span class="value">Tri-Fold</span></a></li> <li><a href="#">Z-Fold<span class="value">Z-Fold</span></a></li> <li><a href="#">Double-Parallel Fold<span class="value">Double-Parallel Fold</span></a></li> </ul> </dd> </dl> <p>Three-Hole Drill</p> <dl id="drill" class="dropdown"> <dt><a href="#">No<span class="value">No</span></a></dt> <dd> <ul> <li><a href="#">No<span class="value">No</span></a></li> <li><a href="#">Yes<span class="value">Yes</span></a></li> </ul> </dd> </dl> <p>Qty</p> <dl id="quantity" class="dropdown"> <dt><a href="#">50<span class="value">50</span></a></dt> <dd> <ul> <li><a href="#">50<span class="value">50</span></a></li> <li><a href="#">100<span class="value">100</span></a></li> <li><a href="#">150<span class="value">150</span></a></li> <li><a href="#">200<span class="value">200</span></a></li> <li><a href="#">250<span class="value">250</span></a></li> <li><a href="#">500<span class="value">500</span></a></li> <li><a href="#">750<span class="value">750</span></a></li> <li><a href="#">1,000<span class="value">1,000</span></a></li> <li><a href="#">1,500<span class="value">1,500</span></a></li> <li><a href="#">2,000<span class="value">2,000</span></a></li> <li><a href="#">2,500<span class="value">2,500</span></a></li> <li><a href="#">3,000<span class="value">3,000</span></a></li> <li><a href="#">3,500<span class="value">3,500</span></a></li> <li><a href="#">4,000<span class="value">4,000</span></a></li> <li><a href="#">4,500<span class="value">4,500</span></a></li> <li><a href="#">5,000<span class="value">5,000</span></a></li> <li><a href="#">5,500<span class="value">5,500</span></a></li> <li><a href="#">6,000<span class="value">6,000</span></a></li> <li><a href="#">6,500<span class="value">6,500</span></a></li> <li><a href="#">7,000<span class="value">7,000</span></a></li> <li><a href="#">7,500<span class="value">7,500</span></a></li> <li><a href="#">8,000<span class="value">8,000</span></a></li> <li><a href="#">8,500<span class="value">8,500</span></a></li> <li><a href="#">9,000<span class="value">9,000</span></a></li> <li><a href="#">9,500<span class="value">9,500</span></a></li> <li><a href="#">10,000<span class="value">10,000</span></a></li> <li><a href="#">12,500<span class="value">12,500</span></a></li> <li><a href="#">15,000<span class="value">15,000</span></a></li> <li><a href="#">17,500<span class="value">17,500</span></a></li> <li><a href="#">20,000<span class="value">20,000</span></a></li> </ul> </dd> </dl> <div id="priceTotal"> <div class="priceText">Your Price:</div> <div class="price">$29.00</div> <div class="clear"></div> </div> <div id="buttonQuoteStart"><a href="#" title="Start Printing">Start Printing</a></div> </div> <div class="bottom"></div> </div>

    Read the article

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do when you have small pieces of data and need to reduce them all the time? A simple example: choosing a service for a request.

    Imagine we have 10 services. Each provides the services host with sets of request headers and post/get arguments, and each service declares that it has 30 unique keys - 10 per set. service A: name, id, ... Now imagine a distributed services host: 200 machines with 10 services on each, where each service has 30 unique keys in its sets. To find which service an incoming request maps to, we make our services post unique values that map to those sets, and we can have 10 000 or more such value sets on each machine per service. service A, machine 1: name = Sam, id = 13245 ... service A, machine 1: name = Ben, id = 33232 ... ... service A, machine 100: name = Ron, id = 777888 ...

    So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000 combinations, and 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find the service for a request (or at least the machine it is running on). The same services have the same rules on every machine across the cluster, so we can first select which service the request belongs to via a rules filter of 10 * 30, which still leaves 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem.

    My idea is to map the 30 * 10 000 values onto something like a Perceptron-style artificial neural network that outputs 1 if the 30 words (or hashes of those words) from the request match, and 0 otherwise. I would then send each such Perceptron for each service from each machine to the gateway, so I would have a Perceptron <-> machine map for each service. Can anyone tell me whether my Perceptron idea is at least "sane"? Do people normally do this some other way? Are there better ANNs for such purposes?
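    One conventional alternative to a learned model here is an exact-match hash index at the gateway: fingerprint each registered value set once, then project each incoming request onto a service's declared keys and do a constant-time lookup. The sketch below is only illustrative - the data shapes (serviceKeys, registrations, the request object) and helper names are assumptions, not part of the original question:

        function fingerprint(keys, values) {
            // Deterministic string for one set of key/value pairs; sort so key order never matters.
            return keys.slice().sort().map(function (k) { return k + "=" + values[k]; }).join("|");
        }

        function buildIndex(serviceKeys, registrations) {
            // serviceKeys: { A: ["name", "id", /* ...30 declared keys */], ... }
            // registrations: [{ service: "A", machine: 1, values: { name: "Sam", id: "13245" } }, ...]
            var index = {};
            registrations.forEach(function (r) {
                index[r.service + "#" + fingerprint(serviceKeys[r.service], r.values)] = r.machine;
            });
            return index;
        }

        function route(serviceKeys, index, request) {
            // request: all 45 items from the gateway, noise included; undeclared keys are simply ignored.
            for (var service in serviceKeys) {
                var machine = index[service + "#" + fingerprint(serviceKeys[service], request)];
                if (machine !== undefined) {
                    return { service: service, machine: machine };
                }
            }
            return null; // no registered value set matched
        }

    Under these assumptions, each request costs one fingerprint and one hash lookup per service (10 in total) rather than a scan over 60 000 000 combinations, at the price of keeping all registered value sets in the gateway's index.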

    Read the article

  • Javascript Tips and Tricks

    - by ybbest
    1. Replace all commas in a JavaScript string. Calling replace with a plain string pattern only replaces the first match; a regular expression with the g flag replaces every occurrence.

        var totalAmount = "100,000,000,000";
        var find = ",";
        var replace = "";
        // Replaces only the first , (string pattern)
        totalAmount = totalAmount.replace(find, replace);
        alert(totalAmount); // "100000,000,000"

        var totalAmount2 = "100,000,000,000";
        var newFind = /,/g;
        // Replaces all , (global regular expression)
        totalAmount2 = totalAmount2.replace(newFind, replace);
        alert(totalAmount2); // "100000000000"
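    For reference, a couple of equivalent approaches (these are not from the original tip): split/join works in any engine, and String.prototype.replaceAll is only available in newer engines (ES2021 and later).

        var amount = "100,000,000,000";
        // Split on every comma and re-join with nothing in between.
        var viaSplitJoin = amount.split(",").join(""); // "100000000000"
        // In engines that support ES2021:
        // var viaReplaceAll = amount.replaceAll(",", ""); // "100000000000"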

    Read the article

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool. But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ? The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better. Unfortunately, this is not always a good measure. Well written assembly language outperforms well written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking about two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions from the System.Linq library. Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the foreach loop and the LINQ expression. This is just a simple find comparison:

        // find implemented via LINQ
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        // find implemented via standard iteration
        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target)
                {
                    return true;
                }
            }

            return false;
        }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        Any         10       26          0.00046             30.00%
        Iteration   10       20          0.00023             -
        Any         100      116         0.00201             18.37%
        Iteration   100      98          0.00118             -
        Any         1000     1058        0.01853             16.78%
        Iteration   1000     906         0.01155             -
        Any         10,000   10,383      0.18189             17.41%
        Iteration   10,000   8843        0.11362             -
        Any         100,000  104,004     1.8297              18.27%
        Iteration   100,000  87,941      1.13163             -

    The LINQ expression is running about 17% slower for average size collections and worse for smaller collections. Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones.
    I decided to try the TakeWhile() algorithm, using a Count() to get the position where it stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

        // Linq form
        public static int GetTargetPosition1(IEnumerable<int> list, int target)
        {
            return list.TakeWhile(item => item != target).Count();
        }

        // traditionally iterative form
        public static int GetTargetPosition2(IEnumerable<int> list, int target)
        {
            int count = 0;

            foreach (var i in list)
            {
                if (i == target)
                {
                    break;
                }

                ++count;
            }

            return count;
        }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile   10       41          0.00041             128%
        Iteration   10       18          0.00018             -
        TakeWhile   100      171         0.00171             88%
        Iteration   100      91          0.00091             -
        TakeWhile   1000     1604        0.01604             94%
        Iteration   1000     825         0.00825             -
        TakeWhile   10,000   15765       0.15765             92%
        Iteration   10,000   8204        0.08204             -
        TakeWhile   100,000  156950      1.5695              92%
        Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought, well, what if TakeWhile() is not the right tool for the job? The problem is TakeWhile() returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate.

    So what if that back and forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it, we just happen to be counting on the way. This mostly matches Any().

        // a new method that uses linq but evaluates the count in a closure.
        public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
        {
            int count = 0;
            list.Any(item =>
                {
                    if (item == target)
                    {
                        return true;
                    }

                    ++count;
                    return false;
                });
            return count;
        }

    Now how does this one compare?

        List<T> On 100,000 Iterations
        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() just iterates forward until it finds the value we're looking for -- which matches the task we're attempting to do.

    So the lesson there is: make sure when you use a LINQ expression you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even with the Any() with the count in the closure it is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was roughly the difference between 1.08 ms and 0.82 ms per iteration in a List<T>. That's really not that bad at all in the grand scheme of things. Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft's developers to have coded and unit tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.

    Read the article
