Search Results

Search found 23226 results on 930 pages for 'date format'.


  • How to use Python to read the physical address (MAC ID) [closed]

    - by getjoefree
    I want to read the physical address (MAC) of a specific NIC. I could previously get the result I wanted with SED.EXE, but SED.EXE is not available in my current environment while Python is, so I'm looking for a way to do it in Python. The general situation (the network cable is unplugged, so no IP address can be obtained) looks like this in the ipconfig output:

    Ethernet adapter:
    Connection-specific DNS Suffix . : Chianet
    Description . . . . . . . . . . . : Marvell Yukon 88E8040 PCI-E Fast Ethernet Controller
    Physical Address. . . . . . . . . : A4-BA-DB-9D-1E-8E
    Dhcp Enabled. . . . . . . . . . . : Yes
    Autoconfiguration Enabled . . . . : Yes

    Ethernet adapter 3:
    Media State . . . . . . . . . . . : Media disconnected
    Description . . . . . . . . . . . : Dell Wireless 1510 Wireless-N WLAN Mini-Card
    Physical Address. . . . . . . . . : 00-23-4D-D9-C0-28

    The NIC descriptions differ, so I can use the description to fetch the corresponding physical address; matching on "Physical Address" alone does not work, because the computer also has a WLAN card. I want Python to read the adapter information and write an output file in this format: SET MAC=A4BADB9D1E8E. The sed version was: ipconfig -all | sed -nrf getmac.sed | sed -e "s/-//g" > WINMAC.BAT, where getmac.sed contains: /Marvell Yukon 88E8040/ { n; s/.*: ([-0-9A-F]+)/set winmac=\1/p; }
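    A minimal Python sketch of one way to do this, assuming the adapter is identified by the description string from the question ("Marvell Yukon 88E8040") and that the output file is WINMAC.BAT as in the sed version; it parses the ipconfig /all output much like the sed script did:

    import re
    import subprocess

    TARGET_DESC = "Marvell Yukon 88E8040"   # adapter to look for (from the question)

    # Run ipconfig and decode its output; errors="ignore" guards against odd codepages.
    output = subprocess.check_output(["ipconfig", "/all"]).decode("mbcs", "ignore")

    mac = None
    seen_target = False
    for line in output.splitlines():
        if "Description" in line and TARGET_DESC in line:
            # Remember that we are inside the block of the adapter we care about.
            seen_target = True
        elif seen_target and "Physical Address" in line:
            # The Physical Address line follows the Description line in ipconfig output.
            match = re.search(r"([0-9A-F]{2}(?:-[0-9A-F]{2}){5})", line)
            if match:
                mac = match.group(1).replace("-", "")
            break

    if mac:
        with open("WINMAC.BAT", "w") as f:
            f.write("SET MAC=%s\n" % mac)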

    Read the article

  • LiteSpeed issue with Content-Type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an AJAX call is made whose response header should be x-json, but LiteSpeed is overriding it with a text/html content type. I've checked the same page under Apache and everything works fine. I compared the response headers from Apache and LiteSpeed; here they are:

    With Apache:
    HTTP/1.1 200 OK
    Date: Fri, 07 Sep 2012 05:58:47 GMT
    Server: Apache
    Expires: Thu, 19 Nov 1981 08:52:00 GMT
    Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    Pragma: no-cache
    Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: application/x-json

    With LiteSpeed:
    HTTP/1.1 200 OK
    Date: Fri, 07 Sep 2012 06:10:55 GMT
    Server: LiteSpeed
    Connection: close
    Expires: Thu, 19 Nov 1981 08:52:00 GMT
    Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    Pragma: no-cache
    Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
    Content-Type: text/html; charset=UTF-8
    Content-Length: 474
    Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work. Here is the screenshot
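    A quick way to reproduce the comparison from the command line (the URL is a placeholder; Magento's prototype.js sends the X-Requested-With header on AJAX calls), dumping only the response headers each server returns:

    # reproduce the AJAX request and show only the response headers
    curl -s -D - -o /dev/null \
         -H "X-Requested-With: XMLHttpRequest" \
         "http://mydomain.com/index.php/module/controller/action/"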

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug JSP:

    Weblogic Startup Since: Friday, October 19, 2012, 08:36:12 AM
    Database Current Time: Wednesday, December 12, 2012, 11:43:44 AM
    Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual. Line 3 is the output of the Java code new Date(). I have checked with the shell date command that the line 2 output matches the OS time. The line 3 output is the mystery; I don't know where the JVM gets it from. On another machine with the same setup, the same JSP outputs:

    Weblogic Startup Since: Tuesday, December 11, 2012, 02:29:06 PM
    Database Current Time: Wednesday, December 12, 2012, 11:51:48 AM
    Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    And on yet another machine:

    Weblogic Startup Since: Monday, December 10, 2012, 05:00:34 PM
    Database Current Time: Wednesday, December 12, 2012, 11:52:03 AM
    Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? NTP runs daily on the HP-UX hosts. Here are the server versions:

    HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
    java version "1.6.0.04"
    Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
    Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
    WebLogic Server Version: 10.3.2.0

    Java properties:
    java.runtime.name=Java(TM) SE Runtime Environment
    java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
    java.vendor=Hewlett-Packard Co.
    java.vendor.url=http\://www.hp.com/go/Java
    java.version=1.6.0.04
    java.vm.name=Java HotSpot(TM) 64-Bit Server VM
    java.vm.info=mixed mode
    java.vm.specification.vendor=Sun Microsystems Inc.
    java.vm.vendor="Hewlett-Packard Company"
    sun.arch.data.model=64
    sun.cpu.endian=big
    sun.cpu.isalist=ia64r0
    sun.io.unicode.encoding=UnicodeBig
    sun.java.launcher=SUN_STANDARD
    sun.jnu.encoding=8859_1
    sun.management.compiler=HotSpot 64-Bit Server Compiler
    sun.os.patch.level=unknown
    os.name=HP-UX
    os.version=B.11.31
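    A small Java sketch (not from the question; the class name is made up) that could be dropped onto one of the affected hosts to log how far the JVM clock drifts from the OS clock over time, assuming /bin/date is available:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class ClockDrift {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            while (true) {
                // JVM view of "now"
                String jvmTime = fmt.format(new Date());

                // OS view of "now", via /bin/date
                Process p = Runtime.getRuntime().exec(
                    new String[] { "/bin/date", "+%Y-%m-%d %H:%M:%S" });
                BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
                String osTime = r.readLine();
                p.waitFor();

                System.out.println("JVM: " + jvmTime + "   OS: " + osTime);
                Thread.sleep(60000);   // sample once a minute
            }
        }
    }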

    Read the article

  • What's up with stat on Mac OS/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS 10.4. For my system, df and mount give:

    cas cas$ df
    Filesystem               512-blocks      Used     Avail Capacity  Mounted on
    /dev/disk0s3               58342896  49924456   7906440    86%    /
    devfs                           194       194         0   100%    /dev
    fdesc                             2         2         0   100%    /dev
                                   1024      1024         0   100%    /.vol
    automount -nsl [166]              0         0         0   100%    /Network
    automount -fstab [170]            0         0         0   100%    /automount/Servers
    automount -static [170]           0         0         0   100%    /automount/static
    /dev/disk2s1              163577856  23225520 140352336    14%    /Volumes/Snapshot
    /dev/disk2s2              409404102   5745938 383187960     1%    /Volumes/Sparse

    cas cas$ mount
    /dev/disk0s3 on / (local, journaled)
    devfs on /dev (local)
    fdesc on /dev (union)
    on /.vol
    automount -nsl [166] on /Network (automounted)
    automount -fstab [170] on /automount/Servers (automounted)
    automount -static [170] on /automount/static (automounted)
    /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
    /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

    cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
    / disk0s3r
    /dev ???r
    /dev ???r
    /.vol ???r
    /Network ???r
    /automount/Servers ???r
    /automount/static ???r
    /Volumes/Snapshot disk2s1r
    /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page: "The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name." What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript: per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...

    Read the article

  • Cannot perform a PECL installation

    - by Petrusa
    I have been trying to do a few PECL installations, but all of them return the same type of error, something related to timezones. I'm running RedHat x86_64 ES5. Attempting to install geoip-1.0.7:

    root@server [~]# pecl install geoip-1.0.7
    downloading geoip-1.0.7.tgz ...
    Starting to download geoip-1.0.7.tgz (9,416 bytes)
    .....done: 9,416 bytes
    Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in PEAR/Validate.php on line 489
    Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /usr/local/lib/php/PEAR/Validate.php on line 489
    3 source files, building
    running: phpize
    Configuring for:
    PHP Api Version:         20090626
    Zend Module Api No:      20090626
    Zend Extension Api No:   220090626
    building in /var/tmp/pear-build-root/geoip-1.0.7
    running: /root/tmp/pear/geoip/configure
    checking for egrep... grep -E
    checking for a sed that does not truncate output... /bin/sed
    checking for cc... cc
    checking for C compiler default output file name... a.out
    checking whether the C compiler works... configure: error: cannot run C compiled programs.
    If you meant to cross compile, use `--host'.
    See `config.log' for more details.
    ERROR: `/root/tmp/pear/geoip/configure' failed

    What is going on? Could anyone assist, please?
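    The timezone warnings are only PHP notices; the install actually fails at configure's "cannot run C compiled programs" check. One common cause on servers like this (an assumption here, not confirmed by the question) is that the build directory lives on a filesystem mounted noexec. A hedged sketch of how to check and work around it:

    # is the PEAR build area on a noexec filesystem?
    mount | grep /var/tmp

    # if so, either remount it with exec temporarily...
    mount -o remount,exec /var/tmp

    # ...or point PEAR/PECL at a temp/build directory that allows execution
    pecl config-set temp_dir /root/tmp
    pecl install geoip-1.0.7

    # the strtotime() warnings are separate and harmless; setting date.timezone
    # in php.ini (e.g. date.timezone = "America/Chicago") silences them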

    Read the article

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:

    curl -vvv --digest -u user -T - https://example.com/file.txt < file

    This does not:

    curl -vvv --digest -u user -T file https://example.com/file.txt

    What's going on?

    * About to connect() to example.com port 443 (#0)
    *   Trying 0.0.0.0... connected
    * Connected to example.com (0.0.0.0) port 443 (#0)
    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS handshake, Server key exchange (12):
    * SSLv3, TLS handshake, Server finished (14):
    * SSLv3, TLS handshake, Client key exchange (16):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSL connection using DHE-RSA-AES256-SHA
    * Server certificate:
    *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
    *   start date: 2010-01-26 07:06:33 GMT
    *   expire date: 2011-01-28 11:22:07 GMT
    *   common name: *.example.com (matched)
    *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
    * SSL certificate verify ok.
    * Server auth using Digest with user 'user'
    > PUT /file.txt HTTP/1.1
    > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
    > Host: example.com
    > Accept: */*
    > Transfer-Encoding: chunked
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
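    One workaround sketch (an assumption, not a confirmed diagnosis): when reading from stdin, curl cannot know the size in advance, so the request goes out with chunked Transfer-Encoding, as the verbose output above shows; spooling stdin to a temporary file first lets curl send a normal Content-Length:

    # spool stdin to a temporary file, then upload it with a known length
    tmp=$(mktemp /tmp/upload.XXXXXX)
    cat > "$tmp"                      # whatever was being piped to curl goes here instead
    curl -vvv --digest -u user -T "$tmp" https://example.com/file.txt
    rm -f "$tmp"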

    Read the article

  • Had almost 300 GB worth of files with random names on my computer, and now they are gone. Any idea what they were and where they went?

    - by John
    A couple of days ago I noticed I had a folder on my computer with more than 15 files in it. All the files were the exact same size (215 MB). They all had different names (just a bunch of random characters like "Abe327(/-38s", etc.). I wasn't sure what they were, so I decided to try to delete them, but then I noticed they had already disappeared from the D: drive. The next day a new folder, with similar names and file sizes, showed up on my C: drive. The timestamps on the first set of files were almost all from a few months ago; they were things like 3:52 AM, 4:03 AM, etc., all from the same date. The set of files that just appeared on the C: drive had yesterday's date on them, but similarly, all of those files had timestamps within a 24-hour period, as if they had all just been created. Now this morning, all of those files are gone, and I didn't delete them. There are now no files like this on either drive. Any idea what these files were? Why were they so large, and why are they switching drives? Why did they disappear completely now, after the initial files were there for a few months?

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
    I have a 16 GB memory stick which used to have a Linux partition. It therefore has two partitions: a 2 GB FAT32 partition and a 14 GB Linux boot partition. The Linux part stopped working, so I decided to reinstall it, but Windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32 one). There seems to be no way to combine the two partitions into one big one, and no way for Windows to repartition the large part of the memory stick to put Linux on it. In the Windows partition manager, Windows sees the large unused partition and lets me delete it, but once I have deleted it, I'm not allowed to format it. I also cannot delete or resize the small partition. So, to summarize: I have a memory stick with two partitions. Windows only sees one of them and won't let me use the other one. I would like to combine the two partitions so I can install Linux on the memory stick again.
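    Disk Management is limited on removable drives, but the command-line diskpart tool can usually wipe and recreate the partition table. A sketch, assuming the stick shows up as disk 1 (check list disk first, since clean erases whichever disk is selected; very old Windows versions may refuse to repartition removable media):

    diskpart
    list disk
    select disk 1
    clean
    create partition primary
    format fs=fat32 quick
    assign
    exit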

    Read the article

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhd files to copy, then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression. One of the files isn't copied. It's relatively big, ~68 GB; the others are 20 GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file was copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

    Get-ChildItem $VMSource *.vhd -Recurse | foreach-object {
        $time = Get-Date -format HH.mm.ss
        Add-Content $logFileName "$time : File Copy ($_) started"
        $fullname = $_.FullName
        Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
        Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
        foreach($error in $errors) {
            if ($error.Exception -ne $null) {
                Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
            }
        }
        $time = Get-Date -format HH.mm.ss
        Add-Content $logFileName "$time : File Copy ($_) finished"
    }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
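    As a cross-check (an alternative tool, not the script's original approach), robocopy retries on failure and logs a per-file result, which can show whether the 68 GB file really completes. A hedged one-line sketch with placeholder paths:

    robocopy D:\VMSource \\backupserver\vhdbackup *.vhd /R:2 /W:10 /NP /LOG+:C:\logs\vhdcopy.log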

    Read the article

  • Connecting XRAID to Sunfire via LSI FC card - Drives not showing up

    - by Matthew Watson
    I've got a SunFire T1000 machine (Solaris 10 10/09 s10s_u8wos_08a SPARC) with an LSI7404EP-LC fibre channel card in it. This is plugged into an XRAID. The system seems to have picked up the card:

    > /usr/platform/`uname -i`/sbin/prtdiag
    IO
    Location    Type  Slot  Path                                            Name                       Model
    ----------- ----- ----  ----------------------------------------------  -------------------------  ---------
    MB/PCIE0    PCIE  0     /pci@780/fibre-channel                          fibre-channel
    MB/PCIE0    PCIE  0     /pci@780/fibre-channel                          fibre-channel
    MB/NET0     PCIE  MB    /pci@7c0/pci@0/network@4                        network-pci14e4,1668
    MB/NET1     PCIE  MB    /pci@7c0/pci@0/network@4,1                      network-pci14e4,1668
    MB/NET2     PCIX  MB    /pci@7c0/pci@0/pci@8/network@1                  network-pci108e,1648
    MB/NET3     PCIX  MB    /pci@7c0/pci@0/pci@8/network@1,1                network-pci108e,1648
    MB/PCIX     PCIX  MB    /pci@7c0/pci@0/pci@8/scsi@2                     scsi-pci1000,50            LSI,1064

    However it doesn't seem to be able to see the XRAID attached to it; lsiutil only reports the onboard SAS controller:

    > /usr/local/bin/lsiutil
    LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009
    1 MPT Port found
         Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
     1.  mpt0              LSI Logic SAS1064 A3      105     010a0000      0
    Select a device:  [1-1 or 0 to quit]

    I've tried adding the configuration to /kernel/drv/sd.conf and /kernel/drv/ssd.conf as per this thread, however format still cannot see any drives on the XRAID. I'm not sure where to go next. Any suggestions? From what I've read, this should pretty much just be plug it in and they show up in format.
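    A few standard Solaris 10 commands for discovering fibre-channel targets, offered as a hedged next step (they assume the Leadville/fp driver stack is handling the HBA; controller names below are placeholders):

    # list FC HBA ports and their state
    fcinfo hba-port

    # list remote ports and LUNs visible on a given HBA port WWN
    fcinfo remote-port -p <hba-port-wwn> -s

    # show attachment points; configure the FC controller if it shows as unconfigured
    cfgadm -al
    cfgadm -c configure c2        # assumption: c2 is the FC controller reported by cfgadm -al

    # rebuild /dev links and re-probe, then check format again
    devfsadm -Cv
    format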

    Read the article

  • Adding add-ins to Excel - strange messages

    - by Jacob
    I am using Excel 2010 and 2013. I would like to add an Excel add-in from the page http://xlloop.sourceforge.net/. There is a file named xlloop-0.3.2 with the extension .xll (Microsoft Excel XLL Add-In). I added this file from the menu: File - Options - Add-Ins - in the Manage combo box I chose Excel Add-Ins - Go... - Browse, and I chose my file. I see the following message:

    "C:\...\xlloop-0.3.2.xll" is not a valid add-in.

    So I made another attempt: I went to File - Open and chose my file. I see the message:

    The file you are trying to open, "xlloop-0.3.2.xll", is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?

    After I clicked Yes, I see a lot of garbage characters (something like Chinese :)). My last attempt was to double-click the file. I see:

    The file format and extension of "xlloop-0.3.2.xll" don't match. The file could be corrupted or unsafe. Unless you trust its source, don't open it. Do you want to open it anyway?

    After clicking Yes I see something like the second attempt. I am really confused, because some of my friends have the same version of Excel and they don't get these messages. Do you have any idea what the problem is with my Excel? I really need this add-in to work with Java. I will be very grateful for your help! Thanks in advance!

    Read the article

  • Windows 7 64-bit installation from alternative media (no DVD/USB Flash drive)

    - by Niels Willems
    Greetings. I currently have Windows 7 x86 installed on my computer and I want to install Windows 7 x64 on a different partition. However, there is a little issue: I cannot run the x64 installer from the Windows 7 x86 I currently have. I was planning to install Windows 7 x64 on another partition, then boot from that partition to install it on the partition I actually want my OS on. Once that was complete I could just format the partition holding the Windows 7 x64 copy I no longer need. But the installer will not run from the x86 version of Windows 7, even though I do not want to upgrade that Windows directly. The reason I'm doing this in such a roundabout way is that my optical drive is broken, and I'm really not into buying a new one since I would use it about once a year. I also don't have a USB flash drive big enough to hold the installation files. As far as I'm aware I cannot use an external hard drive such as this one, which I do have. Are there any alternatives that would let me install Windows 7 x64, or am I forced into buying a USB flash drive or a new optical drive? Thank you in advance for your replies. Edit: This picture shows the current partitions on my laptop. I want to get Windows 7 x64 on the C partition, but I have to install it first on the F partition, then boot the F partition Windows to format C and install x64 on that one. My external drive is J. Edit 2: I have no alternative computer with a DVD drive; the install files are located in an ISO from MA3D. To install my 32-bit version I mounted the ISO in Daemon Tools to replace my Windows Vista, but since I cannot run the 64-bit installer from my 32-bit OS, this doesn't work.

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning:

    "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats."
    "Significant loss of functionality"
    "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved."

    When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4"). This makes no sense to me whatsoever. First, the workbook was created in Excel 2003, so obviously we couldn't have used a feature that doesn't exist there. Second, the modifications we're making don't involve changing the validation rules at all. Third, the complaint Excel is making is simply incorrect: all of the rules are on the same worksheet as the target. As if the story wasn't bizarre enough, I went ahead and saved the worksheet with Excel 2010, then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?

    Read the article

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

    Date: Thu, 06 May 2010 23:04:33 UTC
    Label: PPL
    Origin: PPL
    Suite: ppl
    MD5Sum:
     ebec3527ebc8351468b2ef8796c19855 37325 Packages
     d41d8cd98f00b204e9800998ecf8427e 0 Release
    SHA1:
     a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
     da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
    SHA256:
     dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
     e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. Its configuration file looks like:

    // Automatically upgrade packages from these (origin, archive) pairs
    Unattended-Upgrade::Allowed-Origins {
        "Ubuntu hardy-security";
        "Ubuntu hardy-updates";
        "PPL ppl";
    };

    // List of packages to not update
    Unattended-Upgrade::Package-Blacklist {
    //  "vim";
    //  "libc6";
    //  "libc6-dev";
    //  "libc6-i686";
    };

    // Send email to this address for problems or packages upgrades
    // If empty or unset then no email is sent, make sure that you
    // have a working mail setup on your system. The package 'mailx'
    // must be installed or anything that provides /usr/bin/mail.
    //Unattended-Upgrade::Mail "root@localhost";

    Yet, when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
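    unattended-upgrades matches each Allowed-Origins entry against the origin and archive that apt itself records for a repository, so a useful first check (a sketch, assuming the archive is already in sources.list) is what apt-cache policy reports for the local repo, and whether the Release file carries the expected identification fields:

    # what origin (o=) and archive (a=) does apt record for the local repository?
    apt-cache policy | grep -B1 -A1 PPL

    # regenerate the Release file with the identification fields spelled out
    apt-ftparchive \
        -o APT::FTPArchive::Release::Origin=PPL \
        -o APT::FTPArchive::Release::Label=PPL \
        -o APT::FTPArchive::Release::Suite=ppl \
        -o APT::FTPArchive::Release::Codename=ppl \
        release . > Release

    # then ask unattended-upgrade itself why packages are being skipped
    sudo unattended-upgrade --dry-run --debug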

    Read the article

  • Falsification of an email sent from Lotus Notes

    - by thejumper
    I needed an important email with an attachment from a person (I give the person a fictitious name here: Name Familyname). The person sent me an email in the format below (I changed the content but kept the format). The email he forwarded has two parts: at the bottom, the email he sent, and above it, the answer he received. I told the person that he didn't give me the true information, because the first part of the email is falsified. Please tell me what you think. Thanks a lot.

    From: abc efg
    To: Name familyname
    Date: 2012-03-09 12:14 AM
    Subject: Lorem ipsum dolor

    Nam dictum feugiat neque, euismod convallis mi euismod ut. Mauris at vulputate enim. Nunc posuere tortor vitae justo volutpat luctus. Sed ut ligula id magna dictum blandit id vitae erat. Nunc dignissim eleifend vulputate.

    2012/3/4 Name familyname
    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce aliquam ligula in elit blandit porta. Vestibulum facilisis, elit ut aliquam euismod, erat elit tempor mi, et pulvinar velit neque ac nibh. Nullam fringilla viverra erat sed laoreet. Aenean elementum enim ac elit ultricies luctus.
    Name
    Address. Tel.

    Thanks for your help.

    Read the article

  • Windows XP error message: "Windows cannot find 'explorer.exe'"

    - by Meysam
    In Windows XP I can open "My Computer" and see all the hard drives. I can also see the explorer.exe process running among other processes in Task Manager. But after opening "My Computer", when I double-click on one of the drives to open it, I get the following error message:

    Windows cannot find 'explorer.exe'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start button, and then click Search.

    Although I could detect and remove several suspicious files using Malwarebytes and Microsoft Security Essentials, the problem remains. The interesting point is that if I right-click on a folder and select Open or Explore from the menu, I can open the folder, but if I double-click on the folder, it does not open and I get the above error message. How can I fix this problem? Any advice would be appreciated! Update: I did a full format of the C: drive (NTFS) and installed a fresh Windows XP on it. I no longer get this error when I double-click the C: drive icon, but the same error still appears when I double-click the other drives. Maybe I should format them too!
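    This double-click-on-a-drive behaviour is classically caused by a leftover autorun.inf in each drive's root that still points at a removed malware executable (an assumption here, since the question doesn't show one). A hedged sketch of checking and removing it from a command prompt:

    REM repeat for each affected drive letter, then log off or reboot
    D:
    attrib -h -s -r autorun.inf
    type autorun.inf
    del autorun.inf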

    Read the article

  • Web-based file search on the LAN?

    - by Magnetic_dud
    I would like an easy way to search for files on my LAN (over 500k files on SMB shares; it would take ages any other way). I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file name is explicative enough. Date-range filters, however, are a must for me. I just need a quick search like voidtools' Everything can do, but across the network. The files are on a WHS box (lol, the Videos and Music share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried:
    LanSearch Pro: not good for me, as I need a quick index.
    Network Search Engine: it would be perfect, but does not offer a date-range filter.
    Microsoft Search Server 2008 Express: horrible! First, it does NOT index filenames, and then my Core 2 Duo is not powerful enough to run it smoothly.
    Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result.
    The preinstalled Windows Search 4.0, but it totally sucks at choosing the relevance of data - uninstalled.
    Docco... what's that?
    I am considering trying:
    IBM OmniFind
    DocFetcher (can it work as a client? I have not investigated yet)
    Strigi (it looks like it can work as a client, right?)
    Any ideas/suggestions?

    Read the article

  • Installing lots of Perl modules

    - by Colin Pickard
    Hi, I've been landed with the job of documenting how to install a very complicated application onto a clean server. Part of the application requires a lot of Perl scripts, each of which seems to require lots of different Perl modules. I don't know much about Perl, and I only know one way to install the required modules. This means my documentation now looks like this:

    Type each of these commands and accept all the defaults:
    sudo perl -MCPAN -e 'install JSON'
    sudo perl -MCPAN -e 'install Date::Simple'
    sudo perl -MCPAN -e 'install Log::Log4perl'
    sudo perl -MCPAN -e 'install Email::Simple'
    (.... continues for 2 more pages... )

    Is there any way I can do all this in one line, like I can with aptitude, i.e.:

    Type the following command and go get a coffee:
    sudo aptitude install openssh-server libapache2-mod-perl2 build-essential ...

    Thank you (on behalf of the long-suffering people who will be reading my document).

    EDIT: The best way to do this is to use the packaged versions. For the modules which were not packaged for Ubuntu 10.10 I ended up with a little Perl script (which I found here):

    #!/usr/bin/perl -w
    use CPANPLUS;
    use strict;
    CPANPLUS::Backend->new( conf => { prereqs => 1 } )->install(
        modules => [ qw(
            Date::Simple
            File::Slurp
            LWP::Simple
            MIME::Base64
            MIME::Parser
            MIME::QuotedPrint
        ) ]
    );

    This means I can put a nice one-liner in my document:
    sudo perl installmodules.pl
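    Another common approach (not from the question) is App::cpanminus, which accepts a whole list of modules in one command; a hedged sketch, reusing some module names from the documentation above:

    # install cpanminus once (from the archive if it is packaged for your release,
    # otherwise from CPAN), then feed it the whole module list in one go
    sudo apt-get install cpanminus || sudo cpan App::cpanminus
    sudo cpanm JSON Date::Simple Log::Log4perl Email::Simple File::Slurp LWP::Simple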

    Read the article

  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30 MB or so). This bar does not advance, and an error dialog appears:

    Steam needs to be online to update. Please confirm your network connection and try again.

    The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

    Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
    [...]
    Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
    Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
    Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OS X users. Things that have already been tried, with no effect:
    Switching between wifi and ethernet.
    Killing all Steam processes including ipcserver.
    Moving the ~/Library/Application Support/Steam/registry.vdf file away.
    Requesting those URLs with other clients and from other locations.
    Interesting: the first URL with the date parameter returns the same content even without that parameter (thus it would lead to the same 404s), suggesting the problem is not specific to coming from a particular currently-installed version of Steam.

    Read the article

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got round to configuring some of my own developer things, I was surprised to find some app had installed a binary into /usr/local/bin - a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking in the file columns I can see Date Modified was 9th November 2012, but Date Added to the system was today at 17:01. It's now 10:20PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed to the system within, say, 5 minutes either side of 17:01? EDIT: I found out what galileod was by running galileod --help - it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved - but it would still be interesting to know how to find files added within X minutes of a timeframe for future reference.
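    For future reference, a hedged sketch using Spotlight's kMDItemDateAdded attribute (present on Mountain Lion) to list files added within a window around a given time; the date below is a placeholder, and the ISO timestamps must be adjusted to the actual date and timezone:

    # files whose "Date Added" falls in a ten-minute window around 17:01
    mdfind -onlyin /usr/local \
      'kMDItemDateAdded >= $time.iso(2012-12-20T16:56:00) && kMDItemDateAdded <= $time.iso(2012-12-20T17:06:00)'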

    Read the article

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

    Backup Status
    Operation: Backup
    Active backup destination: 4mm DDS
    Media name: "Media created 04/02/2011 at 21:56"
    Error: The device reported an error on a request to read data from media.
    Error reported: Invalid command.
    There may be a hardware or media problem. Please check the system event log for relevant failures.
    Error: C: is not a valid drive, or you do not have access.
    The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I have tried a couple of tapes). I carry out the following steps:
    Open NTBackup.
    Select the Backup tab.
    Tick the box next to C:.
    Ensure Destination is 4mm DDS and Media is set to New.
    Press Start Backup.
    Choose Replace the data on the media and press Start Backup.
    NTBackup tries to mount the media.
    The error message shows: The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures.

    On checking the log I find the following:

    Event Type: Information
    Event Source: NTBackup
    Event Category: None
    Event ID: 8018
    Date: 04/02/2011
    Time: 22:02:02
    User: N/A
    Computer: SERVER
    Description: Begin Operation
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

    Event Type: Information
    Event Source: NTBackup
    Event Category: None
    Event ID: 8019
    Date: 04/02/2011
    Time: 22:02:59
    User: N/A
    Computer: SERVER
    Description: End Operation: The operation was successfully completed. Consult the backup report for more details.
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • How to use multiple variables in a time calculator in C#

    - by Peter O'Dwyer
    I am building a video time calculator and need to work in frames instead of milliseconds, e.g. at 25 FPS (HH:MM:SS:FR):

    00:00:10:24 <-- Last frame of 10 seconds
    00:00:11:00 <-- First frame of 11 seconds

    My problem is that if the start frames are lower than my end frames, it can't calculate the time:

    00:00:10:05 <-- Start
    01:00:10:10 <-- End
    00:59:59:05 <-- Answer which I can't get!!

    Here's my code:

    string Startdate = Dur_txtStart.Text;
    string Enddate = Dur_txtEnd.Text;
    string Startdate1 = Startdate.Substring(0, 8);
    int Startframe1 = Convert.ToInt32(Startdate.Substring(9, 2));
    string Startdate2 = Enddate.Substring(0, 8);
    int Startframe2 = Convert.ToInt32(Enddate.Substring(9, 2));
    TimeSpan diff = DateTime.Parse(Startdate2).Subtract(DateTime.Parse(Startdate1));
    int frameDiff = Startframe2 - Startframe1;
    int Hour = Convert.ToInt32(diff.Hours);
    int Min = Convert.ToInt32(diff.Minutes);
    int Sec = Convert.ToInt32(diff.Seconds);
    if (frameDiff < 0 && Sec > 0)
    {
        Sec -= 1;
        string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"), Min.ToString("D2"), Sec.ToString("D2"), (Convert.ToInt32(Dur_txtFPS.Text) + frameDiff).ToString("D2"));
        Dur_txtOutTime.Text = frameDiffBuild;
    }
    else
    {
        string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"), Min.ToString("D2"), Sec.ToString("D2"), frameDiff.ToString("D2"));
        Dur_txtOutTime.Text = frameDiffBuild;
    }

    Hope you can help, guys - my head is mashed!
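    A sketch of a simpler approach (not the original code; it reuses the question's control names only for illustration): convert each HH:MM:SS:FF timecode to a total frame count, subtract, and format the difference back into a timecode, which sidesteps the borrow logic entirely:

    using System;

    static class TimecodeMath
    {
        // Convert "HH:MM:SS:FF" to an absolute frame count at the given frame rate.
        public static long ToFrames(string timecode, int fps)
        {
            string[] p = timecode.Split(':');
            int h = int.Parse(p[0]), m = int.Parse(p[1]), s = int.Parse(p[2]), f = int.Parse(p[3]);
            return (((long)h * 3600 + m * 60 + s) * fps) + f;
        }

        // Convert an absolute frame count back to "HH:MM:SS:FF".
        public static string ToTimecode(long frames, int fps)
        {
            long f = frames % fps;
            long totalSeconds = frames / fps;
            return string.Format("{0:D2}:{1:D2}:{2:D2}:{3:D2}",
                totalSeconds / 3600, (totalSeconds / 60) % 60, totalSeconds % 60, f);
        }
    }

    // Usage, reusing the question's control names:
    // int fps = Convert.ToInt32(Dur_txtFPS.Text);
    // long frames = TimecodeMath.ToFrames(Dur_txtEnd.Text, fps) - TimecodeMath.ToFrames(Dur_txtStart.Text, fps);
    // Dur_txtOutTime.Text = TimecodeMath.ToTimecode(frames, fps);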

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal-level values measured in dBm, and the SNMP agent on the remote device reports them as negative values, e.g. -90 dBm. However, check_snmp seems incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

    root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
    /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
    DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
    DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
    Processing line 1
      oidname: DEVICE-MIB::AverageReceiveSNR.0
      response: = INTEGER: 25
    Processing line 2
      oidname: DEVICE-MIB::CurrentNoiseFloor.0
      response: = INTEGER: -97
    SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

    root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
    Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct, however it will never generate a notification when out of range:

    root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
    /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
    DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
    Processing line 1
      oidname: DEVICE-MIB::CurrentNoiseFloor.0
      response: = INTEGER: -97
    SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?

    Read the article

  • User-friendly program to create editable & searchable pdf files, like tax & application forms and su

    - by Nick Gorbikoff
    Hello. Can somebody recommend a user-friendly program that will create (or convert from Excel, Word, or OpenOffice) editable PDF forms? You know, like tax forms that some of us have filled out, where you create a form with a predefined layout and stationery but let the user edit/fill out the fields. I need something user-friendly that a regular person can use. I'm NOT looking for a PDF library (I already use wkhtmltopdf for generating PDFs programmatically). The reason is that we have about 400 documents (internal expense forms, training forms, etc.) in .doc and .xls format that we want to convert to editable PDFs (so that people don't have to fill them out by hand). Coding 400 templates and then converting them using some library or command-line tool is not my idea of fun, especially since those forms change all the time. I'd like to just give HR and the Quality department the tool, so that they can maintain those documents. I looked at everything listed on this page ( http://www.cogniview.com/convert-pdf-to-excel/post/pdf-editing-creation-50-open-sourcefree-alternatives-to-adobe-acrobat/ ), but can't find what I need. Thank you!!!

    Read the article

  • Saving a file as CSV in Excel always removes the BOM

    - by rickp
    I've been trying to find a reasonable solution/explanation (unsuccessfully) to find out why Excel defaults to removing the BOM when saving a file to the CSV type. Please forgive me if you find this a duplicate of this question. This handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I'm going to gather is common among localized software dealing with Unicode characters and a CSV format): We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated with a Hex editor to ensure it was set correctly. Open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel handles loading the file with the correct encoding. Attempts to save this file will prompt you with a warning message indicating that the file may contain features that may not be compatible with Unicode encoding, but asks if you'd like to save anyway. If you select the Save As dialog, it will immediately ask you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file it removes the BOM (obviously along with all the Japanese characters). Why would this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue) it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?

    Read the article
