Search Results

Search found 90197 results on 3608 pages for 'data type conversion'.


  • Retrieving the first digit of a number

    - by Michoel
    Hi, I am just learning Java and am trying to get my program to retrieve the first digit of a number - for example, 543 should return 5, etc. I thought of converting the number to a string, but I am not sure how to convert the first character back to an int. Thanks for any help.

    int number = 534;
    String numberString = Integer.toString(number);
    char firstLetterChar = numberString.charAt(0);
    int firstDigit = ????
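
    A minimal sketch of two ways to finish this - Character.getNumericValue handles the char-to-int step, and the arithmetic route avoids strings entirely:

    ```java
    public class FirstDigit {
        public static void main(String[] args) {
            int number = 534;

            // string route: take the first character and map it back to an int
            String numberString = Integer.toString(Math.abs(number));
            int firstDigit = Character.getNumericValue(numberString.charAt(0));

            // arithmetic route: drop trailing digits until one is left
            int n = Math.abs(number);
            while (n >= 10) {
                n /= 10;
            }

            System.out.println(firstDigit + " " + n); // prints "5 5"
        }
    }
    ```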

    Read the article

  • Prototype to jQuery, help please

    - by Patrique
    Hello, I would like to ask any user who knows both jQuery and Prototype for help converting the following Prototype code to jQuery.

    function showPanelAds(){ $('ads').style.visibility="visible" }

    and

    function blog(id){ var ActionAjax = new Ajax.Updater( {success:'blogphere'}, '/inc/assistidos.asp', { method:'get', parameters:'queryname='+id, onFailure:function(){ $('blogphere').innerHTML="error...<br/><a href=\"javascript:blog('"+id+"')\">Tente novamente</a>"; } }); }

    Thank you to anyone who can help me.
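
    A possible translation (a sketch; Prototype's $('ads') looks elements up by id, so the jQuery selectors use '#', and .done/.fail need jQuery 1.5 or later):

    ```javascript
    function showPanelAds() {
        $('#ads').css('visibility', 'visible');
    }

    function blog(id) {
        $.get('/inc/assistidos.asp', { queryname: id })
            .done(function (html) {
                // Ajax.Updater's success target becomes an explicit update
                $('#blogphere').html(html);
            })
            .fail(function () {
                $('#blogphere').html("error...<br/><a href=\"javascript:blog('" + id + "')\">Tente novamente</a>");
            });
    }
    ```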

    Read the article

  • C++ putting a 2d array of floats into a char*

    - by sam
    Hello, I'm trying to take a 2D vector of floats (input) and put them into a char* (output) in C++.

    void foo(const std::vector<std::vector<float> > &input, char* &output)
    {
        char charBuf[sizeof(output)];
        int counter = 0;
        for(unsigned int i=0; i<input.size(); i++) {
            for(unsigned int p=0; p<input.at(i).size(); p++) {
                //what the heck goes here
            }
        }
    }
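
    One way to fill that in (a sketch, assuming "into a char*" means the raw float bytes laid out back-to-back; note that sizeof(output) is only the size of a pointer, so the buffer has to be sized from the vectors and heap-allocated):

    ```cpp
    #include <cstring>
    #include <vector>

    void foo(const std::vector<std::vector<float> > &input, char* &output)
    {
        // total payload size in bytes
        std::size_t total = 0;
        for (std::size_t i = 0; i < input.size(); ++i)
            total += input[i].size() * sizeof(float);

        output = new char[total]; // caller owns this and must delete[] it

        std::size_t offset = 0;
        for (std::size_t i = 0; i < input.size(); ++i) {
            if (input[i].empty()) continue;
            std::memcpy(output + offset, &input[i][0],
                        input[i].size() * sizeof(float));
            offset += input[i].size() * sizeof(float);
        }
    }
    ```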

    Read the article

  • Python: Matching & Stripping port number from socket data

    - by tobywuk
    Hello, I have data coming in to a Python server via a socket. Within this data is the string '<port>80</port>', or whichever port is being used. I wish to extract the port number into a variable. The data coming in is not XML; I just used the tag approach to identify data for future XML use if needed. I do not wish to use an XML Python library, but simply use something like regexp and strings. What would you recommend is the best way to match and strip this data? I am currently using this code with no luck:

    p = re.compile('<port>\w</port>')
    m = p.search(data)
    print m

    Thank you :)
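
    The pattern above matches only a single word character and captures nothing; a sketch of a working version:

    ```python
    import re

    data = "...<port>80</port>..."  # example payload

    # \d+ matches the whole number and the parentheses capture it
    m = re.search(r'<port>(\d+)</port>', data)
    if m:
        port = int(m.group(1))
        print(port)  # 80
    ```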

    Read the article

  • Adding data sources for unixODBC/isql on Mac OSX Lion

    - by NP01
    I have installed unixODBC from source and the mysql-odbc connector from a .dmg installer on Mac OSX Lion. This was done a while ago, and at that time I successfully installed a data source (let's call it foo). Now I am trying to add another data source (DSN). I've done this through both ODBC Manager and the command-line tool myodbc-installer given with the tar bundle of the mysql-odbc connector from the mysql website. An entry shows up in /Library/ODBC/odbc.ini, which looks like this:

    [ODBC Data Sources]
    bar = MySQL ODBC 5.1 Driver

    [ODBC]
    Trace = 0
    TraceAutoStop = 0
    TraceFile =
    TraceLibrary =

    [myodbc]
    Driver = /usr/local/lib/libmyodbc5.so
    SERVER = localhost
    PORT = 3306

    [bar]
    Driver = /usr/local/lib/libmyodbc5.so
    Description =
    DATABASE = bar

    However, isql fails to find it:

    anitya:Preferences neil$ isql bar bar bar -v
    [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
    [ISQL]ERROR: Could not SQLConnect

    Weird thing is, the old DSN foo, which is not to be seen in /Library/ODBC/odbc.ini or /etc/odbc.ini, works fine:

    anitya:Preferences neil$ isql foo foo foo
    +---------------------------------------+
    | Connected!                            |
    |                                       |
    | sql-statement                         |
    | help [tablename]                      |
    | quit                                  |
    |                                       |
    +---------------------------------------+
    SQL>

    I'm miffed about where the DSN entries need to be entered on OSX Lion to be found by isql. Thanks in advance for your help!
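
    One quick check (a sketch): unixODBC built from source usually reads /usr/local/etc/odbc.ini or ~/.odbc.ini, while /Library/ODBC is where Apple's iODBC and the ODBC Manager GUI write, so the new DSN may simply be landing in a file isql never looks at:

    ```bash
    # print the ini paths this unixODBC build was compiled to use
    odbcinst -j

    # copy the [bar] entry into the SYSTEM or USER data source file
    # reported above, then retest verbosely
    isql bar bar bar -v
    ```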

    Read the article

  • Hard drive mounted at / , duplicate mounted hard drive after using MountManager

    - by HellHarvest
    possible duplicate post I'm running 12.04 64-bit. My system is a dual boot of Ubuntu and Windows 7. Both operating systems share the drive named "Elements". My volume named "Elements" is a 1TB SATA NTFS hard drive that shows up twice in the sidebar in Nautilus. One of the icons is functional and even has the convenient "eject" icon next to it. Below is a picture of the left menu in Nautilus, with the System Monitor File Systems tab open on top of it. Can someone advise me about how to get rid of this extra icon? I think the problem is much more deep-rooted than just a GUI glitch on Nautilus' part. The other icon does nothing but spit out the following error when I click on it (image below). This only happened AFTER I tried using Mount Manager to automate mounting the drive at start up. I've already uninstalled Mount Manager and restarted, but the problem didn't go away. The hard drive does mount automatically now, so I guess that's cool. But now, every time I boot up and open Nautilus, BOTH of these icons appear, one of which is fictitious and useless. According to the image above and the outputs of several other commands, it appears to be mounted at /. In which case, no matter where I am in Nautilus when I try to click on that icon, of course it will tell me that the drive is in use by another program... Nautilus. I'm afraid of trying to unmount this hard drive (sdb6) because of where it appears to be mounted. I'm kind of a noob, and I have this gut feeling that tells me trying to unmount a drive at / will destroy my entire file system. This fear was further strengthened by the output of "$ fsck" at the very bottom of this post. Error immediately below when that 2nd "Elements" hard drive is clicked in Nautilus:

    Unable to mount Elements
    Mount is denied because the NTFS volume is already exclusively opened. The volume may be already mounted, or another software may use it which could be identified for example by the help of the 'fuser' command.

    It's odd to me that that error message claims it's an NTFS volume when everything else tells me it's an ext4 volume. The actual hard drive "Elements" is in fact an NTFS volume.
Here's the output of a few commands and configuration files that may be of interest: $ fuser -a / /: 2120r 2159rc 2160rc 2172r 2178rc 2180rc 2188r 2191rc 2200rc 2203rc 2205rc 2206r 2211r 2212r 2214r 2220r 2228r 2234rc 2246rc 2249rc 2254rc 2260rc 2261r 2262r 2277rc 2287rc 2291rc 2311rc 2313rc 2332rc 2334rc 2339rc 2343rc 2344rc 2352rc 2372rc 2389rc 2422r 2490r 2496rc 2501rc 2566r 2573rc 2581rc 2589rc 2592r 2603r 2611rc 2613rc 2615rc 2678rc 2927r 2981r 3104rc 4156rc 4196rc 4206rc 4213rc 4240rc 4297rc 5032rc 7609r 7613r 7648r 9593rc 18829r 18833r 19776r $ sudo df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb6 496G 366G 106G 78% / udev 2.0G 4.0K 2.0G 1% /dev tmpfs 791M 1.5M 790M 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.0G 672K 2.0G 1% /run/shm /dev/sda1 932G 312G 620G 34% /media/Elements /home/solderblob/.Private 496G 366G 106G 78% /home/solderblob /dev/sdb2 188G 100G 88G 54% /media/A2B24EACB24E852F /dev/sdb1 100M 25M 76M 25% /media/System Reserved $ sudo fdisk -l Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00093cab Device Boot Start End Blocks Id System /dev/sda1 2048 1953519615 976758784 7 HPFS/NTFS/exFAT Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e8d9b Device Boot Start End Blocks Id System /dev/sdb1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sdb2 206848 392378768 196085960+ 7 HPFS/NTFS/exFAT /dev/sdb3 392380414 1465147391 536383489 5 Extended /dev/sdb5 1456762880 1465147391 4192256 82 Linux swap / Solaris /dev/sdb6 392380416 1448374271 527996928 83 Linux /dev/sdb7 1448376320 1456758783 4191232 82 Linux swap / Solaris Partition table entries are not in disk order $ cat /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> UUID=77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 defaults 0 1 UUID=F6549CC4549C88CF /media/Elements ntfs-3g users 0 0 $ sudo blkid /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs" /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs" /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs" /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4" $ sudo blkid -c /dev/null (appears to be exactly the same as above) /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs" /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs" /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs" /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4" $ mount /dev/sdb6 on / type ext4 (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda1 on /media/Elements type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096) binfmt_misc on /proc/sys/fs/binfmt_misc type 
binfmt_misc (rw,noexec,nosuid,nodev) /home/solderblob/.Private on /home/solderblob type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=76a47b0175afa48d,ecryptfs_fnek_sig=391b2d8b155215f7) gvfs-fuse-daemon on /home/solderblob/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=solderblob) /dev/sdb2 on /media/A2B24EACB24E852F type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096) /dev/sdb1 on /media/System Reserved type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096) $ ls -a . A2B24EACB24E852F Ubuntu 12.04.1 LTS amd64 .. Elements System Reserved $ cat /proc/mounts rootfs / rootfs rw 0 0 sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0 udev /dev devtmpfs rw,relatime,size=2013000k,nr_inodes=503250,mode=755 0 0 devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /run tmpfs rw,nosuid,relatime,size=809872k,mode=755 0 0 /dev/disk/by-uuid/77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 rw,relatime,user_xattr,acl,barrier=1,data=ordered 0 0 none /sys/fs/fuse/connections fusectl rw,relatime 0 0 none /sys/kernel/debug debugfs rw,relatime 0 0 none /sys/kernel/security securityfs rw,relatime 0 0 none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0 none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0 /dev/sda1 /media/Elements fuseblk rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0 /home/solderblob/.Private /home/solderblob ecryptfs rw,relatime,ecryptfs_fnek_sig=391b2d8b155215f7,ecryptfs_sig=76a47b0175afa48d,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0 gvfs-fuse-daemon /home/solderblob/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0 /dev/sdb2 /media/A2B24EACB24E852F fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0 /dev/sdb1 /media/System\040Reserved fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0 gvfs-fuse-daemon /root/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0 $ fsck fsck from util-linux 2.20.1 e2fsck 1.42 (29-Nov-2011) /dev/sdb6 is mounted. WARNING!!! The filesystem is mounted. If you continue you ***WILL*** cause ***SEVERE*** filesystem damage. Do you really want to continue<n>? no check aborted.
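
    A low-risk way to chase the phantom icon (a sketch): compare what /etc/fstab promises against the live mount table, then restart Nautilus so it re-reads the mounts - no unmounting needed. And the fear is well-founded: sdb6 is the root filesystem here, so it must not be unmounted, exactly as the fsck warning says.

    ```bash
    # what fstab promises vs. what is actually mounted
    cat /etc/fstab
    mount | grep -i elements

    # restart the file manager so it re-reads the mount table
    nautilus -q && nautilus &
    ```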

    Read the article

  • ULogd2.x - Documents - IPFIX data generation

    - by Gomathivinayagam
    I would like to generate IPFIX data from the packets that are coming to my local system, as part of an experiment. It seems ULogd is a good tool for that. I am able to capture PCAP data, but there is very little documentation available on IPFIX-format data generation with ULogd 2.x (there are very few examples provided in ulogd.conf). Can you provide me any links that describe how to generate IPFIX data using ulogd 2.x? 1) What are the options available? I saw there is a polling-interval configuration, but I have no idea how it works. 2) If I set hash_enable = 0 and uncomment the polling_interval value, I get an exception saying the NFCT plugin requires a hash table, even though I have specified hash_buckets and hash_max_entries. Could you help with this? 3) In general, I would like to know how the NFCT plugin works in ulogd 2.x. I sent mail to the ulogd mailing list, but there are no replies. Could you shed some light?

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision, as shown in the figure below. The Guide details each of these stages and the technologies supporting them:

    Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop.

    Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.

    Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.

    Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools.

    So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and the delivery of highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including:

    - Sentiment analysis
    - Marketing campaign analysis
    - Customer churn modeling
    - Fraud detection
    - Research and Development
    - Risk Modeling
    - And more

    As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
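
    To make the Organize and Decide stages concrete, a minimal Sqoop sketch (host, credentials, table and directory names are illustrative, not from the guide):

    ```bash
    # pull a MySQL table into HDFS for processing
    sqoop import \
      --connect jdbc:mysql://mysql-host/sales \
      --username etl --password-file /user/etl/.mysql.pw \
      --table web_clicks \
      --target-dir /data/raw/web_clicks

    # push aggregated results back to MySQL for BI tools
    sqoop export \
      --connect jdbc:mysql://mysql-host/sales \
      --username etl --password-file /user/etl/.mysql.pw \
      --table click_summary \
      --export-dir /data/out/click_summary
    ```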

    Read the article

  • Accidentally mounted a ReiserFS drive as MBR on my windows box - how do I recover?

    - by Ryan
    I had a WD Netcenter with a 160GB drive that kept dropping off the network. I opened up the enclosure and removed the hard drive, connecting it to a Windows box without knowing the drive used ReiserFS. When mounting on the Windows box, I chose "MBR" as the filesystem. 70GB of data is corrupted: 90% of it is Word documents, Excel spreadsheets, and JPGs - all mission critical. I attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk this was because I chose "none" as the filesystem. I attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

    TestDisk 6.14-WIP, Data Recovery Utility, May 2012
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org

    Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
    The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
    Check the harddisk size: HD jumpers settings, BIOS detection...

    The following partitions can't be recovered:
         Partition               Start          End      Size in sectors
    >  ReiserFS 3.6           62 241  8   19458   0 18   311581568
       ReiserFS 3.6           62 248 55   19458   8  2   311581568
       ReiserFS 3.6           62 254 37   19458  13 47   311581568
       ReiserFS 3.6           63   6 28   19458  20 38   311581568
       ReiserFS 3.6           63  13 11   19458  27 21   311581568
       ReiserFS 3.6           63  21 43   19458  35 53   311581568
       ReiserFS 3.6           63  27 41   19458  41 51   311581568
       ReiserFS 3.6           63  37 35   19458  51 45   311581568
       ReiserFS 3.6           63  54 20   19458  68 30   311581568
       ReiserFS 3.6           63  76 26   19458  90 36   311581568
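
    A possible next step (a sketch; work only on a dd image of the drive, since reiserfsck's rebuild options are destructive). Note also that TestDisk's "seems too small" warning suggests the drive size is being misdetected (jumpers/BIOS), which would explain candidate partitions ending beyond cylinder 19457; that is worth resolving first.

    ```bash
    # image the whole drive first
    sudo dd if=/dev/sda of=/mnt/spare/wd160.img bs=1M conv=noerror,sync

    # attach the image; -P scans it for partitions (needs util-linux 2.21+;
    # otherwise use losetup -o with the partition's byte offset)
    sudo losetup -fP --show /mnt/spare/wd160.img   # prints e.g. /dev/loop0

    # let reiserfsck rebuild the superblock, then the tree, on the copy
    sudo reiserfsck --rebuild-sb /dev/loop0p1
    sudo reiserfsck --rebuild-tree /dev/loop0p1
    ```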

    Read the article

  • SQL 2008 Replication corrupt data problem

    - by Jonathan K
    We took a SQL 2000 database, took a LiteSpeed backup, and restored it on a SQL 2008 active/passive cluster. We then set up replication to replicate the data back to SQL 2000, so 2008 is the publisher/distributor and 2000 is doing a pull subscription. Everything works well, except we occasionally get corrupt data in varchar/text fields on the subscriber. For example, we have a table with 4500 records. When we run this statement:

    update MedstaffProvider set Notes = 'Cell Phone: 360.123.4567 Answering Service: 360.123.9876' where LastName = 'smith'

    The record in the 2008 database is updated as expected. But in the subscriber database we'll get gibberish in the Notes field: óPÌ[1] T $Oé[1] ð²ñ. K

    Here's what we know:

    - This is repeatable, meaning we can run that same query all day long and get the same gibberish.
    - If you alter the update statement slightly, the data gets replicated just fine.
    - The collation on both databases is the same.
    - So far we've only detected the problem with text/varchar fields. (The Notes field above is text.)
    - Only one or two records in a table are impacted.
    - The table structure looks identical in 2000 and 2008. We haven't made any changes.

    We have found one solution that fixes the problem: recreate the table in 2008 (say as MedStaffProvider2), insert all the data, drop the original table, rename the new table to the original name, set up replication again, and run the exact same update statement. It then works as expected. Does anyone have any idea what might be happening here? Or are there any other techniques we can use to troubleshoot this? I've found a workaround, but would really like to understand why this is happening.
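
    One way to narrow down where the garbling happens (a sketch; sp_browsereplcmds shows what the distributor actually queued for delivery):

    ```sql
    -- run in the distribution database on the 2008 side; if the UPDATE's
    -- text is already garbled in the queued command, the fault is in
    -- capture/distribution rather than in the 2000 subscriber applying it
    USE distribution;
    EXEC sp_browsereplcmds;
    ```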

    Read the article

  • What GPT partition type to use for protecting DRBD metadata?

    - by Carsten Scholtes
    I'm planning to install a DRBD device on a (replicated) disk with two GPT partitions. DRBD requires some space for (preferably "internal") metadata at the end of the underlying device. I'm hesitant to leave this space unpartitioned (or unformatted in a normal partition). I'd like to reserve an extra partition at the end of the underlying disk device for the metadata. (If I understand correctly, DRBD would not care about the partition or its type and could then use that space exclusively.) My question is: which would be a suitable GPT partition type for such a metadata partition?

    - It should not be a type interpreted while booting (such as EF00 EFI System).
    - It should not be a type prone to be modified accidentally by the booted OS (such as 8200 Linux swap, 8e00 Linux LVM, fd00 Linux raid). (The booted OS will be Ubuntu Linux 12.04.3.)
    - It should not be a type indicating a normal filesystem (such as 0c01 or 8301), prone to be formatted correspondingly.
    - It should not be a type requiring any special content in the partition (since the content is to be handled exclusively by DRBD).
    - It should express the purpose of being reserved for something special (namely DRBD).

    (The types I listed are as provided by gdisk. I'm thinking about using some type unlikely to be used by the OS (maybe bf0a Solaris Reserved 4) or an invented(?) type such as fd01 (close to fd00 Linux raid…). Would something like this be suitable, too dangerous, or even possible?)
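
    For reference, a sketch of how such a partition could be created with sgdisk, whatever type is chosen (device name, partition number, and the bf0a choice are illustrative; --typecode also accepts a full 128-bit GUID, which is the safer route for a made-up type):

    ```bash
    # create partition 3 in the last free stretch of the disk (0:0 = largest
    # free block), set its type code, and label it so its purpose is obvious
    sudo sgdisk --new=3:0:0 --typecode=3:bf0a \
                --change-name=3:"DRBD metadata" /dev/sdx

    sudo sgdisk --print /dev/sdx   # verify
    ```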

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives ripping the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g. if you go from Arcserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or... when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • Recovering Data from a Linkstation LS-WXL/R1

    - by kingkool68
    I've been running a Buffalo Linkstation LS-WXL/R1 in RAID1 for a few weeks. Two nights ago we had a brief power outage. Yesterday when I tried to mount the disk it couldn't be found. I logged in to the web admin and it couldn't see any storage attached, and no arrays were available. Well, this sucks. I ended up taking the disks out and mounting them in an Ubuntu virtual machine. They show up as an array, but to the best of my limited knowledge I can't start the array. I could still see the 6 partitions, so I'm confident the data is there. I can use PhotoRec (http://www.cgsecurity.org/wiki/PhotoRec) to recover most of the files I want, it just takes some time. Could I make an image of the data partition and mount that in Ubuntu so I can get at the data I want through the file system? 95% of the data on my Linkstation LS-WXL/R1 is backup. I just put a few folders of images on there that I need to get back. I'm already preparing to format the disks when I'm done and rebuild the RAID1 array from scratch. Any advice would be appreciated.
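
    That should work; a sketch (partition and path names are illustrative; these units typically use Linux md RAID1, with the data partition last):

    ```bash
    # first try assembling the array read-only from one member
    sudo mdadm --assemble --run --readonly /dev/md0 /dev/sdb6

    # if mdadm refuses, image the partition and loop-mount the copy;
    # with RAID1 and an end-of-device md superblock the filesystem
    # starts at offset 0, so a plain loop mount usually works
    sudo dd if=/dev/sdb6 of=/mnt/spare/ls-data.img bs=1M conv=noerror,sync
    sudo mkdir -p /mnt/recovered
    sudo mount -o loop,ro /mnt/spare/ls-data.img /mnt/recovered
    ```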

    Read the article

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, the server using Microsoft’s IIS FTP 7.5. The client side will be running on Linux and I am using M2Crypto for the OpenSSL wrapping (Python). I suspect the problem is on the server side (IIS 7.5) due to the following discovery: if I host using FileZilla with BOTH the control and data channel forced to be encrypted, it works 100% (100% file transmission). If I use IIS as the server, everything works up to the point where the data channel takes over - i.e. all data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, all works 100% as long as I don't force the data channel. Here are some screenshots to demo this - client view after killing the client: pics @ http://forums.iis.net/p/1172936/1960994.aspx#1960994 Summary: We can establish the connection, do directory listings, start the upload, see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.

    Read the article

  • How to Programmatically Split and Manipulate Rows of Data From Excel

    - by Charlene
    I am hoping one of you will be able to help get me started on this issue. I need to create some sort of macro or VBA code to split and manipulate rows of data in Excel. For this example, we have 5 rows of data. The first 3 rows are item information for order # 0000000000-00 and the last 2 rows are item information for order # 0000000000-01. I need one row ("HDR") for each order number, and one row ("ITM") for each product per order. I have included an example below showing the data I will receive and the desired outcome.

    Raw data:

    order-id       product-num     date       buyer-name  product-name  quantity-purchased
    0000000000-00  10000000000000  5/29/2014  John Doe    Product 0     1
    0000000000-00  10000000000001  5/29/2014  John Doe    Product 1     2
    0000000000-00  10000000000002  5/29/2014  John Doe    Product 2     1
    0000000000-01  10000000000002  5/30/2014  Jane Doe    Product 2     1
    0000000000-01  10000000000003  5/30/2014  Jane Doe    Product 3     1

    Desired outcome:

    HDR  0000000000-00   John Doe   5/29/2014
    ITM  10000000000000  Product 0  1
    ITM  10000000000001  Product 1  2
    ITM  10000000000002  Product 2  1
    HDR  0000000000-01   Jane Doe   5/30/2014
    ITM  10000000000002  Product 2  1
    ITM  10000000000003  Product 3  1

    Any and all help would be much appreciated!!! Thank you.
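
    A starting-point macro (a sketch, assuming the raw data sits on Sheet1 with headers in row 1 and the six columns in the order shown, and output goes to Sheet2):

    ```vb
    Sub SplitOrders()
        Dim src As Worksheet, dst As Worksheet
        Dim r As Long, outRow As Long
        Dim lastOrder As String

        Set src = Worksheets("Sheet1")
        Set dst = Worksheets("Sheet2")
        outRow = 1
        lastOrder = ""

        For r = 2 To src.Cells(src.Rows.Count, 1).End(xlUp).Row
            ' emit an HDR row whenever the order-id changes
            If src.Cells(r, 1).Value <> lastOrder Then
                lastOrder = src.Cells(r, 1).Value
                dst.Cells(outRow, 1).Value = "HDR"
                dst.Cells(outRow, 2).Value = src.Cells(r, 1).Value ' order-id
                dst.Cells(outRow, 3).Value = src.Cells(r, 4).Value ' buyer-name
                dst.Cells(outRow, 4).Value = src.Cells(r, 3).Value ' date
                outRow = outRow + 1
            End If
            ' every source row produces one ITM row
            dst.Cells(outRow, 1).Value = "ITM"
            dst.Cells(outRow, 2).Value = src.Cells(r, 2).Value ' product-num
            dst.Cells(outRow, 3).Value = src.Cells(r, 5).Value ' product-name
            dst.Cells(outRow, 4).Value = src.Cells(r, 6).Value ' quantity
            outRow = outRow + 1
        Next r
    End Sub
    ```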

    Read the article

  • Fast distributed filesystem for a large amounts of data with metadata in database

    - by undefined hero
    My project uses several processing machines and one storage machine. Currently storage is organized as an MSSQL FILETABLE shared folder. Every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage and their metadata. After completing a task, a processing machine puts the resulting data back in storage. From there it's taken by another processing machine, which also generates some file and puts it back in storage, and so on. Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS, to lift the load from the storage machine, so they can take data from each other and not only from the storage machine. Can you suggest a particular distributed FS which meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at one time is several terabytes (storage can handle this, but the processors cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, not modified. My current platform is Windows, but I'm ready to switch if there is a substantially more convenient solution on another one.

    Read the article

  • Numbering grouped data in Excel

    - by Jeff
    I have an Excel spreadsheet (2010) with data similar to this:

    Dogs Brown Nice
    Dogs White Nice
    Dogs White Moody
    Cats Black Nice
    Cats Black Mean
    Cats White Nice
    Cats White Mean

    I want to group these animals, but I only care about species and color. I don't care about disposition. I want to assign group numbers to the set as shown here:

    1 Dogs Brown Nice
    2 Dogs White Nice
    2 Dogs White Moody
    3 Cats Black Nice
    3 Cats Black Mean
    4 Cats White Nice
    4 Cats White Mean

    I was able to select all the species and colors, then from the Data tab select 'Advanced', then 'Unique records only'. This collapsed the data so that I could number the visible rows. Then when I cleared the filter I could easily just fill the blank areas under the numbers with the number above. The problem is that my real data has far too many rows for this to be practical. Also, the trick about entering 1 in the first cell, 2 in the cell below, selecting both then dragging the corner down to 'auto-number' doesn't seem to work when you're viewing filtered rows. Any way to do this?
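
    If a formula approach is acceptable, this avoids filtering entirely (a sketch, assuming species in column A, color in column B, data starting in row 1, and group numbers going in column C):

    ```
    C1: =1
    C2: =IF(AND(A2=A1, B2=B1), C1, C1+1)
    ```

    Fill C2 down the whole column; the number increments exactly when the species/color pair changes, which matches data grouped as in the example. Convert column C to values afterwards if the numbers need to survive re-sorting.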

    Read the article

  • Solr DataImportHandler problem importing latin data

    - by Alvin
    I'm using Solr 1.4 and Tomcat 6. The DB is MySQL 5.1, storing latin-encoded data. When I run the DataImportHandler, the data as viewed in the Solr admin has broken characters:

    <doc>
      <str name="id">295</str>
      <str name="subject">Tuấn Tú</str>
      ...
      <arr name="title">
        <str>tunt721</str>
      </arr>
    </doc>

    True data view:

    <doc>
      <str name="id">295</str>
      <str name="subject">Tu?n Tú</str>
      ...
      <arr name="title">
        <str>tunt721</str>
      </arr>
    </doc>

    Please help me fix this problem. Many thanks.
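
    A common fix sketch for latin1/UTF-8 mismatches with the DataImportHandler is to make the JDBC connection encoding explicit in data-config.xml (connection details here are illustrative):

    ```xml
    <dataSource type="JdbcDataSource"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost:3306/mydb?useUnicode=true&amp;characterEncoding=UTF-8"
                user="root"
                password="***"/>
    ```

    If the column is declared latin1 in MySQL but actually holds UTF-8 bytes, characterEncoding may need to be latin1 instead, so the bytes pass through unconverted.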

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding. We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. Best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output device for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - The external drive is only used for data storage; the system is entirely intact.
    - The drive had a single NTFS partition filling the entire device (2TB WD Elements).
    - The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image.
    - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
    - The drive is just a few months old and healthy (regular SMART / fs checks).
    - I have not rebooted the OS (CrunchBang).
    - /proc/partitions still holds the correct information (and has been stored).
    - I have dd's output (records in / out / bytes).
    - testdisk did not find any partitions on quick or deep search.
    - I'm running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (> 80%) is unnecessary media files.

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

    dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
    194568+0 records in
    194568+0 records out
    99618816 bytes (100 MB) copied

    grep . /sys/block/sdc/sdc*/{start,size}:
    /sys/block/sdc/sdc1/start:2048
    /sys/block/sdc/sdc1/size:3907022848

    cat /proc/partitions:
    major minor  #blocks     name
    ** Snipped **
    8     32     1953512448  sdc
    8     33     1953511424  sdc1

    current fdisk -l output:
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

    Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
    255 heads, 63 sectors/track, 243201 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdc doesn't contain a valid partition table
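
    The plan is sound; a sketch of the partition-table step using the saved numbers (start 2048, 3907022848 sectors, MBR type 7 = NTFS):

    ```bash
    # clear the GPT structures the FreeNAS image wrote
    # (partition metadata only; file data is untouched)
    sudo sgdisk --zap-all /dev/sdc

    # recreate the original MBR entry at the recorded offsets
    # (sfdisk input: start,size,type; -uS = sector units)
    echo '2048,3907022848,7' | sudo sfdisk -uS /dev/sdc

    # the NTFS boot sector sat inside the overwritten first 100 MB, but
    # NTFS keeps a backup copy in the partition's last sector; testdisk's
    # Advanced -> Boot menu can restore it from that copy
    sudo testdisk /dev/sdc
    ```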

    Read the article

