Search Results

Search found 29702 results on 1189 pages for 'select insert'.

  • How to initialize two logical drives on an HP P400i controller without a reboot

    - by John
    What I am trying to do is initialize two logical drives on an HP P400i embedded controller without rebooting the system. Here is my current array config:

        array A (SAS, Unused Space: 0 MB)
            logicaldrive 1 (17.9 GB, RAID 5, OK)
            logicaldrive 2 (17.9 GB, RAID 5, OK)
            logicaldrive 3 (75.9 GB, RAID 5, OK)
            logicaldrive 4 (25.0 GB, RAID 5, OK)
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
            physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 72 GB, OK)
        array B (SAS, Unused Space: 0 MB)
            logicaldrive 5 (99 MB, RAID 0, OK)
            logicaldrive 6 (68.2 GB, RAID 0, OK)
            physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 72 GB, OK)

    This is a Windows 2003 machine running the HpCISs2.sys driver version 6.20.0.32. I have the ACU and ACU CLI tools installed, version 8.28.13.0; the P400i firmware is version 2.74.

    What I'd like to do is remove physical drive 1I:1:4, delete the two logical drives in array B, then insert a new drive into bay 4 that contains two new logical drives and have them show up in array B again. So far, after I remove the drive and delete the failed logical drives, I insert the new drive and run "hpacucli rescan". The new drive shows up as an unassigned physical drive, but I can't figure out how to, for lack of a better word, "mount" the two logical drives on the new unassigned disk. If I reboot the system, the array controller picks up the new fourth drive and creates array B with the drives without a problem, but I'd really like to avoid rebooting the server. Any ideas?
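
    For reference, a hedged sketch of the hpacucli side of this; the slot number is an assumption (check with "hpacucli ctrl all show"), and note that "create" builds fresh logical drives rather than re-attaching the two that already exist on the inserted disk:

        # make the controller notice the newly inserted disk
        hpacucli rescan
        # confirm it shows up as an unassigned physical drive (slot=0 is an assumption)
        hpacucli ctrl slot=0 pd all show
        # the documented way to put logical drives on the unassigned disk
        hpacucli ctrl slot=0 create type=ld drives=1I:1:4 raid=0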

  • Strange windows boot problem - My computer won't boot from hard disk

    - by user29779
    Hello,

    Until yesterday, I had Windows XP installed on my computer. After installing the OS and setting the BIOS to boot from the hard disk first, I discovered a strange problem: the OS wasn't booted, and the only way I could get it to load was to insert a Windows installation disk, enter repair mode, run "fixboot", and restart. The problem only occurred when the computer was booted after a shut-down; if I only restarted it, everything worked fine.

    Yesterday I upgraded my XP to Win7 and the problem persists. I tried the same "trick" I used with XP to get it to load, entering repair mode and running "bootrec /fixmbr" and "bootrec /fixboot", but that didn't work (and when I ran "scanos" it didn't find any Windows installations). Eventually, I got it to load by changing the settings in the BIOS to boot from CD first and HD second, removing the installation disk from the drive, letting it fail to boot from CD, and then re-inserting the disk.

    Does anyone have any idea what the cause may be, or how I can investigate this issue? Thanks!

    Marina

  • Replacing all disks in a non-OS RAID 5 volume

    - by molecule
    Hi all,

    We currently have a server with 8 x HDD slots. It is an HP DL380 G5 with a P400 controller. 2 x HDD are in a RAID 1+0 config and host the OS; 6 x HDD are in a RAID 5 config and hold an Oracle DB. The RAID 5 volume is running out of space, and we would like to swap all 6 disks with higher-capacity ones. Excuse my ignorance, as I am pretty new to this... I believe we will need to back up the data, delete the RAID volume, insert the new disks, recreate the volume, and restore the data. Two questions:

    1. Do we need to worry about the OS partition, or is it completely independent, so we can simply take out the 6 disks, insert 6 new ones, and get the controller to recognize them and form a new RAID 5 volume? We should not need to reinstall the OS or Oracle, correct?

    2. Since we are going to restore the data on the volume from another source (our vendor will take care of this), but we would like to keep the existing data on the 6 old disks just in case we run into issues and want to fall back: is this possible?

    Thanks in advance.
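
    For reference, a hedged sketch of the delete/recreate step from the ACU CLI; the slot number and logical drive number are assumptions, so check the current config first:

        # show the current layout (slot=0 is an assumption)
        hpacucli ctrl slot=0 show config
        # delete the old RAID 5 data volume (ld 2 is an assumption; do not touch the OS volume)
        hpacucli ctrl slot=0 ld 2 delete
        # recreate it across the six new, unassigned disks
        hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=5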

  • Permission issue for Apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2.

    I have mounted a 10GB EBS volume on an instance at /mnt/ebs1/. After mounting and formatting the volume, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi, with this content:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/
        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me: "Forbidden: You don't have permission to access / on this server." I found the user running Apache with ps aux: it is www-data, group www-data. I have tried changing the ownership of /mnt/ebs1 and its subdirectories with chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
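
    Not a confirmed diagnosis, but a common cause of this exact symptom is worth ruling out: www-data needs execute (search) permission on every directory along the path, not just ownership of the tree:

        # show the permissions of each component along the path (namei is in util-linux)
        namei -m /mnt/ebs1/project/apache/django.wsgi
        # grant search permission on any parent directory that lacks it
        sudo chmod o+x /mnt /mnt/ebs1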

  • How to burn a data DVD in Windows XP

    - by SabreWolfy
    I am trying to burn a data DVD (DVD+R) in Windows XP SP3 on a Dell desktop computer. The computer has a licensed copy of Nero 6.3. Nero indicates that an update to version 6.6 is available, but the link provided redirects me to the Nero website to purchase the upgrade, which I'm not interested in doing.

    After creating a project in Nero 6.3, inserting a blank DVD+R, and trying to start burning, Nero tells me to insert an appropriate disc into the drive; it does not seem to detect the blank DVD+R. I downloaded InfraRecorder and cdrtfe from SourceForge. Neither of these programs worked either: both indicated that I should insert the correct media, with cdrtfe saying there is no disc in the drive. I tried another blank DVD+R with the same result. I then inserted a CD-R containing data, and Windows read it without a problem, so I have no reason to believe that the drive is faulty.

    I am aware that Windows XP itself cannot burn DVDs. However, here three third-party programs are unable to burn a data DVD in Windows XP, and the specifications shown in Nero indicate that DVD+R is compatible with the drive. How can I burn a backup data DVD in Windows XP?

  • Control copy/paste doesn't work

    - by Guest
    I have an HP Pavilion dv6-3126er laptop with Windows 7 Home Edition installed, and neither copy nor paste works (I've tried Ctrl+C, Ctrl+V, Shift+C, Shift+V, Ctrl+Insert, Shift+Insert). I ran a system check from cmd with "sfc /scannow"; it repaired something, and I restarted, but it didn't solve the problem. I've also tried many key combinations (like Alt+Ctrl+Fn), but nothing works in any program. In Microsoft Word 2003 there is no key combination shown next to copy/paste in the menu (on my previous computer they were there, in brackets). Shift+Delete works, by the way.

    I bought this laptop a few weeks ago, and I discovered this problem in the first days. I have no viruses, because I haven't even had time to connect it to the internet; I checked it for viruses anyway, and it is clean. I don't want to do a system restore, because I see no reason to do that on a pretty clean system. I hope it is not a problem with the laptop itself. Maybe there is another reason? Maybe I need to run some more system checks? Any suggestions? Thanks in advance.

  • "Enter/Return" Key with Word Mobile on Windows Mobile

    - by Maarx
    Some of our employees are using PDAs running Windows Mobile. I wish I could provide more data regarding versions, but frankly these things aren't my jurisdiction; someone has simply come to me looking for what they thought would be a quick fix.

    They're using Word Mobile and the barcode scanner to record large volumes of data. The scanner's default action is to insert the scanned text exactly as if it had been input with the keyboard, and to put a newline at the end. That's great, because it's exactly what we need it to do: separate data with newlines. The issue comes when the system can't read the barcode and the employee has to type in the data by hand. They've discovered a very peculiar quirk of Mobile applications: pressing the hardware Enter/Return key on the keyboard appears to save and exit the application. How do we change this behavior?

    They've realized that using the stylus to "click" the virtual on-screen keyboard's Enter/Return key will add the necessary newline, but it's a huge inconvenience for them. How do I fix the default behavior of the Enter/Return key for Word Mobile to instead insert a newline?

  • What is the fastest way to clone an InnoDB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}" | mysql -B -r $conn $sourcedb | tail -n +2 | cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
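
    For what it's worth, a hedged variant of the per-table copy that avoids parsing SHOW CREATE TABLE output; the names mirror the script's variables, and whether it is measurably faster for InnoDB is an assumption to benchmark, not a promise:

        # copy structure (including indexes) and data in two statements
        mysql ${conn} -e "CREATE TABLE ${destdb}.${table} LIKE ${sourcedb}.${table}; INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table};"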

  • Systemd Service Start With Dynamic Port Value From Docker

    - by Sheriffen
    Using CoreOS, Docker and systemd to manage my services, I want to properly perform service discovery. Since CoreOS utilizes etcd (a distributed key-value store), there is a very convenient way to do this: in systemd's ExecStartPost I can just insert the started service into etcd without problems. My use case needs something like this:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": 5555 }'

    which works like a charm. But this is where my idea popped up. Docker has the power to randomly assign a port if I just run "docker run -p 5555", which is awesome, since I don't have to set it statically in the *.service file and I could possibly run multiple instances on the same host. What if I could get the randomly assigned port and insert it instead of the static 5555? It turns out I can use the "docker port" command to get the port, and with some formatting we can extract just the port with:

        $(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)

    which works if I set it (using bash) like this:

        /usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)' }'

    But using systemd I just can't get it to work. This is the code I'm using:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)'}'

    Somehow I got something wrong, but it's hard to debug, since it works when typed in the terminal.
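
    A hedged sketch of the usual workaround: systemd does not run command lines through a shell, so $( ) is never expanded; wrapping the command in an explicit shell, and escaping $ as $$ so systemd passes it through literally, is the standard fix. Container and key names below are the poster's own:

        ExecStartPost=/bin/bash -c '/usr/bin/etcdctl set /services/myServiceName "{\"host\": \"%H\", \"port\": $$(/usr/bin/docker port my-container-name 5555 | cut -d: -f2)}"'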

  • Weird rendering artefact in vim (terminal, not MacVim)

    - by Tobi Lehman
    Running Mac OS X, using either Terminal.app or iTerm2, there is a strange artefact in the character rendering that I have a hard time explaining and an even harder time understanding. I'll start with a video of my screen so that you can see an example of it in action (the video embed has not survived here).

    From the video you can see a few ways it is weird; for example, sometimes when I hit a letter in insert mode, the character is double-printed. When I go into normal mode, the artefact remains. When I re-enter insert mode, hitting backspace copies the characters on the left to the position under the cursor. This has happened on OS X Lion and Mountain Lion, under both Terminal.app and iTerm2. It never happens under MacVim. Also, I use GNU/Linux on my other machine and have never had this happen; I am pretty sure it is strictly a Mac OS X issue, but I do not know how to fix it. For a while, I've been working around it by using MacVim most of the time, but I prefer working in a terminal. Does anyone know what is happening here, and if so, how can I fix it?

    EDIT: I tried using the MacVim Vim executable, and I still get strange artefacts, but they are localized to the left side of the screen (screenshot omitted).

  • CD/DVD drive not detecting locally burned CDs/DVDs, but working fine with genuine discs

    - by Rahul
    I've been using a Dell Inspiron 1420 (32-bit, Windows Vista) for 2.5 years, and I'm facing a strange problem with my CD/DVD drive. I cannot run/play a CD/DVD that I get burned from my friends, but when I insert a genuine CD, I'm able to play/run it, and when I try to install the Vista package that came with my notebook, the CD/DVD loads fine. If I insert a CD/DVD from a friend, the disc doesn't load and the system hangs, yet all these CDs/DVDs work on other systems; I've tested them on many of my friends' PCs. So right now I'm able to run only genuine CDs and a few genuine DVDs.

    My experiments:

    - I tried to install Windows Vista using a genuine DVD - it worked.
    - I tried to install Ubuntu from a disc shipped by Canonical Ltd. - it worked.
    - I tried to install openSUSE from an .iso file burned to a DVD on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs; tested on 4 other PCs).
    - I tried to play a DVD containing movies, burned on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs).

    Any help/suggestions would be appreciated. Thanks.

  • How to make an "import image" button/field in a PDF form?

    - by Joe
    How to make an "import image" button/field in a PDF form? I am designing a "Lost Pet" poster for a local animal shelter. The idea is to make a PDF file that users of the shelter's website can download, insert their pet's information, and then print it out if their pet goes missing. I will be designing the poster in InDesign CS3, and then exporting it to a PDF file. I will then add form fields for the user to fill out. I am ok with making text entry form fields. That is a simple matter. What's not so simple is figuring out a way to allow the user to insert a photo of their lost pet into the PDF, save the file, and then print it out with the photo in it. I am running Adobe Acrobat Professional 8. I am running it on a Mac. All of the searches I've done have told me that if I was running Acrobat in Windows, I'd have access to this other program called LiveCycle Designer which has pre-built form item libraries, including a Image Field. But I cannot find any similar option on the Mac version. Has anyone has any experience doing something like this? If so, I could certainly use some tips on making this work. Just a quick clarification... This is a volunteer design project that I am doing for the shelter, so I don't want to spend money on any extra software/addons at all. I am hoping this can be done somehow with the pre-existing software I have.

  • Can't mount Linux USB disk: it just creates a /dev/sg device but no /dev/sd

    - by MTilsted
    I have a Corsair R60 SSD, a disk with both SATA and USB connectors. But the USB side seems to be a bit non-standard, or maybe it's just my Fedora Linux. When I connect the disk over USB to a running Fedora 14 system, a device called /dev/sg3 is added, but that is all; no new /dev/sd* device is created, so I can't mount the disk. If I look at "cat /proc/scsi/sg/device_strs" I get:

        ATA       Hitachi HTS54321   FB2O
        HL-DT-ST  DVDRAM GSA-T50N    RP05
        Seagate   Desktop            0130
        Corsair   CSSD-R60GB2

    So the disk is there (the last entry), but my Linux will for some reason not see it as a USB hard disk. When I insert other USB disks they work fine; it is only this specific disk that causes problems. I have tried 3 different computers with the same result.

    A hint to the problem may be that if I attach the disk to a Windows system (over USB), the disk is called "a fixed disk" and not a portable disk as expected. The disk works fine with Linux if I connect it with the SATA cable, but I would really like to have it working with USB too (to mount it on computers without SATA).

    Added: I did try to mount /dev/sg3, but mount says it's not a block device (file says it's a character special device).

    Added output from dmesg:

        [  97.454073] usb 7-1: USB disconnect, address 2
        [ 105.913055] hub 2-0:1.0: unable to enumerate USB device on port 3
        [ 107.048054] usb 2-3: new high speed USB device using ehci_hcd and address 5
        [ 107.162900] usb 2-3: New USB device found, idVendor=1b1c, idProduct=1ab8
        [ 107.162903] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=5
        [ 107.162906] usb 2-3: Product: CSSD-R60GB2
        [ 107.162908] usb 2-3: Manufacturer: Corsair
        [ 107.162910] usb 2-3: SerialNumber: 10111441000000990069
        [ 107.167651] scsi7 : usb-storage 2-3:1.0
        [ 108.195543] scsi 7:0:0:0: Direct-Access Corsair CSSD-R60GB2 PQ: 1 ANSI: 0
        [ 108.197732] scsi 7:0:0:0: Attached scsi generic sg3 type 0
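
    For anyone debugging the same symptom, a hedged sketch of the usual checks (lsscsi and sg_map ship with the lsscsi and sg3_utils packages):

        # map SCSI generic devices to block devices, if any exist
        lsscsi -g
        sg_map -x
        # watch what the kernel does right after the disk is plugged in
        dmesg | tail -n 20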

  • Enable BitLocker and save the key to a share

    - by user273694
    I have searched all over the web but cannot find a complete answer to this: how to enable BitLocker on a laptop with TPM, and store a file with the BitLocker recovery key and TPM password, by USING THE manage-bde command-line tool. The file should be the same as the one created in the BitLocker manager UI. I do NOT want to save to AD. The same question was asked here but was not answered correctly. The goal is to write a script to be used with an endpoint manager. I have tried the following:

        manage-bde -on C:

    This works fine, but does not create or save a key.

        manage-bde -on C: -rk C:\myfolder\
        manage-bde -on C: -RecoveryKey C:\myfolder\ -rp

    The output from the last two methods states that a key has been saved to C:\myfolder and so on, but that is not the case. It also says that I have to:

    1. Save the password in a secure location.
    2. Insert a USB flash drive with an external key file into the computer.
    3. Restart and run a hardware test.
    4. Type "manage-bde -status" to check if the hardware test succeeded.

    After a restart, I get an error saying that BitLocker could not be enabled because the BitLocker startup key or recovery password could not be found on the USB device... C: was not encrypted.

    Why am I asked to insert a USB drive? I simply want to encrypt the hard drive and save the recovery information to a file automatically. Is that too much to ask? Help please!

  • Trying to retrieve data with a thermaltake blacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem: the laptop auto-turns-off within about 10 seconds of pressing the power button (power buttons lit temporarily, and no display with or without an external monitor). I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help.

    Before having the laptop serviced, I wanted to get some data off the hard drive. I bought a Thermaltake BlacX enclosure, intending to use it both to retrieve the drive data and later as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer > Manage > Storage I see the drive is there, and there's an option to "initialize" the drive. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive.

    I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The drive to recover is a Western Digital 500GB/7200rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery run Windows 7. Does anybody have experience using a BlacX enclosure to recover data off an already-formatted drive?

  • Apache APC (Windows): can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more, but I am not sure where I could improve things. Here are the stats after 1 week of running with the current configuration:

    General Cache Information
        APC Version: 3.1.9
        PHP Version: 5.4.4
        APC Host: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
        Server Software: Apache
        Shared Memory: 1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking)
        Start Time: 2014/06/08 05:00:00
        Uptime: 6 days, 11 hours and 55 minutes
        File Upload Support: 1

    Host Status Diagrams
        Memory Usage: Free 99.7 MBytes (77.9%), Used 28.3 MBytes (22.1%)
        Hits & Misses: Hits 510818 (99.9%), Misses 608 (0.1%)
        Fragmentation: 0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments)

    File Cache Information
        Cached Files: 693 (35.4 MBytes)
        Hits: 5143359
        Misses: 1087
        Request Rate (hits, misses): 13.24 cache requests/second
        Hit Rate: 13.24 cache requests/second
        Miss Rate: 0.00 cache requests/second
        Insert Rate: 0.01 cache requests/second
        Cache full count: 0

    User Cache Information
        Cached Variables: 0 (0.0 Bytes)
        Hits: 0
        Misses: 0
        Request Rate (hits, misses): 0.00 cache requests/second
        Hit Rate: 0.00 cache requests/second
        Miss Rate: 0.00 cache requests/second
        Insert Rate: 0.00 cache requests/second
        Cache full count: 0

    Runtime Settings
        apc.cache_by_default = 1
        apc.canonicalize = 1
        apc.coredump_unmap = 0
        apc.enable_cli = 0
        apc.enabled = 1
        apc.file_md5 = 0
        apc.file_update_protection = 2
        apc.filters = -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$
        apc.gc_ttl = 3600
        apc.include_once_override = 0
        apc.lazy_classes = 0
        apc.lazy_functions = 0
        apc.max_file_size = 2M
        apc.num_files_hint = 7000
        apc.preload_path =
        apc.report_autofilter = 0
        apc.rfc1867 = 0
        apc.rfc1867_freq = 0
        apc.rfc1867_name = APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix = upload_
        apc.rfc1867_ttl = 3600
        apc.serializer = default
        apc.shm_segments = 1
        apc.shm_size = 128M
        apc.shm_strings_buffer = 4M
        apc.slam_defense = 0
        apc.stat = 1
        apc.stat_ctime = 0
        apc.ttl = 7200
        apc.use_request_time = 1
        apc.user_entries_hint = 4096
        apc.user_ttl = 7200
        apc.write_lock = 1

  • How to recover a Windows password without reinstalling, if you forgot it?

    - by user38908
    Usually, we can recover a Windows admin password in two traditional ways. The first is to change the Windows password from another admin account; the second is to reset the previous password with a Windows password reset disk created before you forgot the password. Take Windows XP, for example:

    1. At the Windows XP login prompt, when the password is entered incorrectly, click the Reset button in the login-failed window.
    2. Insert the password reset diskette into the computer and click Next.
    3. If the diskette is correct, Windows XP will open a window prompting for the new password you wish to use.

    However, we often ignore the importance of security until we have been locked out of the computer. Fortunately, there is still a last way to unlock the computer without reinstalling: erase the Windows password with a Windows password reset CD, which can recover an admin password for Windows 7/XP/Vista/NT/2000/2003. Take Windows Password Unlocker, for example; these are the steps to create the reset CD:

    1. Download Windows Password Unlocker from the Password Unlocker official site.
    2. Decompress the download and note that there is an .ISO image file. Burn the image file onto a blank CD with the burner freely supplied by Password Unlocker.
    3. Insert the newly created CD into the locked computer and re-boot it from the CD drive.
    4. After the CD launches, a window pops up with all your account names (if you have several accounts); select the account whose password you have forgotten to reset it.

    With just one press, this software can remove the Windows password instantly.

  • vim: mapping <control-j> key

    - by bhh1988
    When I'm in insert mode, I sometimes want to be able to move around without using the arrow keys and without having to go back into normal mode. So in my vimrc I've set up key bindings to do this:

        imap <C-l> <right>
        imap <C-h> <left>
        imap <C-k> <up>
        imap <C-j> <down>

    But for some odd reason the only one that doesn't work is the last one, <C-j>. Pressing it in insert mode just gives no response. I'm wondering why this might be, and I just don't know where to even begin looking for the problem. It doesn't appear to be mapped to anything else, since nothing happens when I press it, whether I'm in the terminal or in gvim. Any pointers would be great! Thanks!
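
    A hedged way to check whether something else grabbed the key (run from a shell; vim opens, prints what last set the insert-mode mapping, then quits):

        vim -c 'verbose imap <C-j>' -c 'q'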

  • Sed issue with numbers exceeding 9

    - by Imane Fateh
    Maybe my problem is kind of obvious to you, but I really need a solution. I need to generate a file.sql from a file.csv, so I use this command:

        cat file.csv | sed "s/\(.*\),\(.*\)/insert into table(value1, value2) values\('\1','\2'\);/g" > file.sql

    It works perfectly, but when the backreference numbers exceed 9 (for example \10, \11, etc.), sed takes only the first digit into consideration (\1 in this case) and ignores the rest. I want to know if I missed something or if there is another way to do it. Thank you!

    EDIT: The not-working example. My file.csv looks like:

        2013-04-01 04:00:52,2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27

    What I get:

        insert into table val1,val2,val3,val4,val5,val6,val7,val8,val9,val10,val11,val12,val13,val14,val15,val16
        values ('2013-04-01 07:39:43',2,37,74,36526530,3877,0,0,6080,2013-04-01 07:39:430,2013-04-01 07:39:431,2013-04-01 07:39:432,2013-04-01 07:39:433,2013-04-01 07:39:434,2013-04-01 07:39:435,2013-04-01 07:39:436);

    After the ninth element I get the first one instead of the 10th, 11th, etc.
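
    The root cause is that sed backreferences only go up to \9, so \10 is parsed as \1 followed by a literal 0, which matches the output shown above exactly. A hedged awk sketch that sidesteps the limit (the table name is a placeholder, and every field is quoted, which may not suit numeric columns):

        awk -F, -v q="'" '{
            row = ""
            for (i = 1; i <= NF; i++)
                row = row (i > 1 ? "," : "") q $i q
            print "insert into mytable values (" row ");"
        }' file.csv > file.sql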

  • Blank Mail from PHP application

    - by brettlwilliams
    Problem: blank email from a PHP web application. Confirmed: the app works on Linux and has various problems in a Windows server environment; blank emails are the last remaining problem. PHP version 5.2.6 on the server.

    I'm a librarian implementing a PHP-based web application to help students complete their assignments. I have installed this application before on a Linux-based free web host and had no problems. Email is controlled by two files, email_functions.php and email.php. While email can be sent, all that is sent is a blank email. My IT department is an ASP-only shop, so I can get little to no help there. I also cannot install additional libraries like PHPMailer or Swift Mailer. You can see a functional copy at http://rpc.elm4you.org/ and download a copy from SourceForge from the link there. Thanks in advance for any insight into this!

    email_functions.php:

        <?php
        /**********************************************************
        Function: build_multipart_headers
        Author: Michael Berkowski
        Last Modified: September 2007
        Purpose: Creates email headers for a message of type
        multipart/mime. This will include a plain text part and HTML.
        **********************************************************/
        function build_multipart_headers($boundary_rand)
        {
            global $EMAIL_FROM_DISPLAY_NAME, $EMAIL_FROM_ADDRESS, $CALC_PATH, $CALC_TITLE, $SERVER_NAME;
            // Using \n instead of \r\n because qmail doubles up the \r and screws everything up!
            $crlf = "\n";
            $message_date = date("r");
            // Construct headers for multipart/mixed MIME email. It will have a plain text and HTML part
            $headers = "X-Calc-Name: $CALC_TITLE" . $crlf;
            $headers .= "X-Calc-Url: http://{$SERVER_NAME}/{$CALC_PATH}" . $crlf;
            $headers .= "MIME-Version: 1.0" . $crlf;
            $headers .= "Content-type: multipart/alternative;" . $crlf;
            $headers .= " boundary=__$boundary_rand" . $crlf;
            $headers .= "From: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf;
            $headers .= "Sender: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf;
            $headers .= "Reply-to: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf;
            $headers .= "Return-Path: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf;
            $headers .= "Date: $message_date" . $crlf;
            $headers .= "Message-Id: $boundary_rand@$SERVER_NAME" . $crlf;
            return $headers;
        }

        /**********************************************************
        Function: build_multipart_body
        Author: Michael Berkowski
        Last Modified: September 2007
        Purpose: Builds the email body content to go with the
        headers from build_multipart_headers()
        **********************************************************/
        function build_multipart_body($plain_text_message, $html_message, $boundary_rand)
        {
            //$crlf = "\r\n";
            $crlf = "\n";
            $boundary = "__" . $boundary_rand;
            // Begin constructing the MIME multipart message
            $multipart_message = "This is a multipart message in MIME format." . $crlf . $crlf;
            $multipart_message .= "--{$boundary}{$crlf}Content-type: text/plain; charset=\"us-ascii\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}";
            $multipart_message .= $plain_text_message . $crlf . $crlf;
            $multipart_message .= "--{$boundary}{$crlf}Content-type: text/html; charset=\"iso-8859-1\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}";
            $multipart_message .= $html_message . $crlf . $crlf;
            $multipart_message .= "--{$boundary}--$crlf$crlf";
            return $multipart_message;
        }

        /**********************************************************
        Function: build_step_email_body_text
        Author: Michael Berkowski
        Last Modified: September 2007
        Purpose: Returns a plain text version of the email body
        to be used for individually sent step reminders
        **********************************************************/
        function build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $teacher_info, $name, $class, $project_id)
        {
            global $CALC_PATH, $CALC_TITLE, $SERVER_NAME;
            $step_email_body =<<<BODY
        $CALC_TITLE
        Step $stepnum: {$arr_instructions["step$stepnum"]["title"]}

        Name: $name
        Class: $class
        BODY;
            $step_email_body .= build_text_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info);
            $step_email_body .= "\n\n";
            $step_email_body .=<<<FOOTER
        The $CALC_TITLE offers suggestions, but be sure to check with your teacher
        to find out the best working schedule for your assignment!

        If you would like to stop receiving further reminders for this project,
        click the link below:
        http://$SERVER_NAME/$CALC_PATH/deleteproject.php?proj=$project_id
        FOOTER;
            // Wrap text to 78 chars per line
            // Convert any remaining HTML <br /> to \r\n
            // Strip out any remaining HTML tags.
            $step_email_body = strip_tags(linebreaks_html2text(wordwrap($step_email_body, 78, "\n")));
            return $step_email_body;
        }

        /**********************************************************
        Function: build_step_email_body_html
        Author: Michael Berkowski
        Last Modified: September 2007
        Purpose: Same as above, but with HTML
        **********************************************************/
        function build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $teacher_info, $name, $class, $project_id)
        {
            global $CALC_PATH, $CALC_TITLE, $SERVER_NAME;
            $styles = build_html_styles();
            $step_email_body =<<<BODY
        <html>
        <head>
        <title> $CALC_TITLE </title>
        $styles
        </head>
        <body>
        <h1> $CALC_TITLE Schedule </h1>
        <strong>Name:</strong> $name <br />
        <strong>Class:</strong> $class <br />
        BODY;
            $step_email_body .= build_html_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info);
            $step_email_body .=<<<FOOTER
        <p>
        The $CALC_TITLE offers suggestions, but be sure to check with your teacher
        to find out the best working schedule for your assignment!
        </p>
        <p>
        If you would like to stop receiving further reminders for this project,
        <a href="http://{$SERVER_NAME}/$CALC_PATH/deleteproject.php?proj=$project_id">click this link.</a>
        </p>
        </body>
        </html>
        FOOTER;
            return $step_email_body;
        }

        /**********************************************************
        Function: build_html_styles
        Author: Michael Berkowski
        Last Modified: September 2007
        Purpose: Just returns a string of <style /> for the HTML message body
        **********************************************************/
        function build_html_styles()
        {
            $styles =<<<STYLES
        <style type="text/css">
            body { font-family: Arial, sans-serif; font-size: 85%; }
            h1 { font-size: 120%; }
            table { border: none; }
            tr { vertical-align: top; }
            img { display: none; }
            hr { border: 0; }
        </style>
        STYLES;
            return $styles;
        }

        /**********************************************************
        Function: linebreaks_html2text
        Author: Michael Berkowski
        Last Modified: October 2007
        Purpose: Convert <br /> html tags to \n line breaks
        **********************************************************/
        function linebreaks_html2text($in_string)
        {
            $out_string = "";
            $arr_br = array("<br>", "<br />", "<br/>");
            $out_string = str_replace($arr_br, "\n", $in_string);
            return $out_string;
        }
        ?>

    email.php:

        <?php
        require_once("include/config.php");
        require_once("include/instructions.php");
        require_once("dbase/dbfunctions.php");
        require_once("include/email_functions.php");

        ini_set("sendmail_from", "[email protected]");
        ini_set("SMTP", "mail.qatar.net.qa");

        // Verify that the email has not already been sent by checking for a cookie
        // whose value is generated each time the form is loaded freshly.
        if (!(isset($_COOKIE['rpc_transid']) && $_COOKIE['rpc_transid'] == $_POST['transid'])) {
            // Setup some preliminary variables for email.
            // The scanning of $_POST['email'] already took place when this file was included...
            $to = $_POST['email'];
            $subject = $EMAIL_SUBJECT;
            $boundary_rand = md5(rand());
            $mail_type = "";

            switch ($_POST['reminder-type']) {
                case "progressive":
                    $arr_dbase_dates = array();
                    $conn = rpc_connect();
                    if (!$conn) {
                        $mail_success = FALSE;
                        $mail_status_message = "Could not register address!";
                        break;
                    }
                    // Sanitize all the data that will be inserted into the table...
                    // We need to remove "CONTENT-TYPE:" from name/class to defang them.
                    // Additionally, we can't allow any line-breaks in those fields to avoid
                    // hacks to email headers.
                    $ins_name = mysql_real_escape_string($name);
                    $ins_name = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_name);
                    $ins_name = str_replace("\n", "", $ins_name);
                    $ins_class = mysql_real_escape_string($class);
                    $ins_class = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_class);
                    $ins_class = str_replace("\n", "", $ins_class);
                    $ins_email = mysql_real_escape_string($email);
                    $ins_teacher_info = $teacher_info ? "YES" : "NO";
                    switch ($format) {
                        case "Slides":
                            $ins_format = "SLIDES";
                            break;
                        case "Video":
                            $ins_format = "VIDEO";
                            break;
                        case "Essay":
                        default:
                            $ins_format = "ESSAY";
                            break;
                    }
                    // The transid from the previous form will be used as a project identifier.
                    // Steps will be grouped by project identifier.
                    $ins_project_id = mysql_real_escape_string($_POST['transid'] . md5(rand()));
                    $arr_dbase_dates = dbase_dates($dates);
                    $arr_past_dates = array();
                    // Iterate over the dates array and build a SQL statement for each one.
                    $insert_success = TRUE;
                    // $min_reminder_date = date("Ymd", mktime(0,0,0,date("m"),date("d")+$EMAIL_REMINDER_DAYS_AHEAD,date("Y")));
                    for ($date_index = 0; $date_index < sizeof($arr_dbase_dates); $date_index++) {
                        // Make sure we're using the right keys...
                        $ins_date_index = $date_index + 1;
                        // The insert will only happen if the date of the event is in the future.
                        // For dates today and earlier, no insert.
                        // For dates today or after the reminder deadline, we'll send the email
                        // immediately after the inserts.
                        if ($arr_dbase_dates[$date_index] > (int)$min_reminder_date) {
                            $qry =<<<QRY
        INSERT INTO email_queue (
            NOTIFICATION_ID, PROJECT_ID, EMAIL, NAME, CLASS, FORMAT, TEACHER_INFO, STEP, MESSAGE_DATE
        ) VALUES (
            NULL,
            '$ins_project_id',
            '$ins_email',
            '$ins_name',
            '$ins_class',
            '$ins_format',
            '$ins_teacher_info',
            $ins_date_index, /* step number */
            {$arr_dbase_dates[$date_index]} /* Date in the integer format yyyymmdd */
        )
        QRY;
                            // Attempt to do the insert...
                            $result = mysql_query($qry);
                            // If even one insert fails, bail out.
                            if (!$result) {
                                $mail_success = FALSE;
                                $mail_status_message = "Could not register address!";
                                break;
                            }
                        }
                        // For dates today or earlier, store the steps=>dates in an array
                        // so the mails can be sent immediately.
                        else {
                            $arr_past_dates[$ins_date_index] = $arr_dbase_dates[$date_index];
                        }
                    }
                    // Close the connection resources.
                    mysql_close($conn);

                    // SEND OUT THE EMAILS THAT HAVE TO GO IMMEDIATELY...
                    // This should only be step 1, but who knows...
                    //var_dump($arr_past_dates);
                    for ($stepnum = 1; $stepnum <= sizeof($arr_past_dates); $stepnum++) {
                        $email_teacher_info = ($teacher_info && $EMAIL_TEACHER_REMINDERS) ? TRUE : FALSE;
                        $boundary = md5(rand());
                        $plain_text_body = build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info, $name, $class, $ins_project_id);
                        $html_body = build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info, $name, $class, $ins_project_id);
                        $multipart_headers = build_multipart_headers($boundary);
                        $multipart_body = build_multipart_body($plain_text_body, $html_body, $boundary);
                        mail($to, $subject . ": Step " . $stepnum, $multipart_body, $multipart_headers, "[email protected]");
                    }
                    // Set appropriate flags and messages
                    $mail_success = TRUE;
                    $mail_status_message = "Email address registered!";
                    $mail_type = "progressive";
                    set_mail_success_cookie();
                    break;

                // Default to a single email message.
                case "single":
                default:
                    // We don't want to send images in the message, so strip them out of the existing structure.
                    // This big ugly regex strips the whole table cell containing the image out of the table.
                    // Must find a better solution...
                    //$email_table_html = eregi_replace("<td class=\"stepImageContainer\" width=\"161px\">[\s\r\n\t]*<img class=\"stepImage\" src=\"images/[_a-zA-Z0-9]*\.gif\" alt=\"Step [1-9]{1} logo\" />[\s\r\n\t]*</td>", "\n", $table_html);
                    // Show more descriptive text based on the value of $format
                    switch ($format) {
                        case "Video":
                            $format_display = "Video";
                            break;
                        case "Slides":
                            $format_display = "Presentation with electronic slides";
                            break;
                        case "Essay":
                        default:
                            $format_display = "Essay";
                            break;
                    }
                    $days = (int)$days;
                    $html_message = "";
                    $styles = build_html_styles();
                    $html_message =<<<HTMLMESSAGE
        <html>
        <head>
        <title> $CALC_TITLE </title>
        $styles
        </head>
        <body>
        <h1> $CALC_TITLE Schedule </h1>
        <strong>Name:</strong> $name <br />
        <strong>Class:</strong> $class <br />
        <strong>Email:</strong> $email <br />
        <strong>Assignment type:</strong> $format_display <br /><br />
        <strong>Starting on:</strong> $date1 <br />
        <strong>Assignment due:</strong> $date2 <br />
        <strong>You have $days days to finish.</strong><br />
        <hr />
        $email_table_html
        </body>
        </html>
        HTMLMESSAGE;
                    // Create the plain text version of the message...
                    $plain_text_message = strip_tags(linebreaks_html2text(build_text_all_steps($arr_instructions, $dates, $query_string, $teacher_info)));
                    // Add the title, since it doesn't get built in by build_text_all_steps...
                    $plain_text_message = $CALC_TITLE . " Schedule\n\n" . $plain_text_message;
                    $plain_text_message = wordwrap($plain_text_message, 78, "\n");
                    $multipart_headers = build_multipart_headers($boundary_rand);
                    $multipart_message = build_multipart_body($plain_text_message, $html_message, $boundary_rand);
                    $mail_success = FALSE;
                    if (mail($to, $subject, $multipart_message, $multipart_headers, "[email protected]")) {
                        $mail_success = TRUE;
                        $mail_status_message = "Email sent!";
                        $mail_type = "single";
                        set_mail_success_cookie();
                    }
                    else {
                        $mail_success = FALSE;
                        $mail_status_message = "Could not send email!";
                    }
                    break;
            }
        }

        function set_mail_success_cookie()
        {
            // Prevent the mail from being resent on page reload. Set a timestamp cookie.
            // Expires in 24 hours.
            setcookie("rpc_transid", $_POST['transid'], time() + 86400);
        }
        ?>

  • XML/jQuery: create listings based on user selection

    - by Sirius Mane
    Alright, so what I need to accomplish is a static web page that displays information pulled from an XML document and renders it to the screen without refreshing; basic AJAX stuff, I guess. The trick is, as I try to think this through, I keep running into 'logical' barriers mentally.

    Objectives:

    - Have a chart which displays baseball team names, wins, losses, ties. In my XML doc there is a 'pending' status, so games not completed should not be displayed. (Need help here)
    - Have a selection list which allows you to select a team, populated from the XML doc. (Done)
    - Upon selecting a particular team from the aforementioned selection list, the page should display, in a separate area, all of the planned games for that team, including pending ones; basically all of the games associated with that team, with their dates (which are included in the XML file). (Need help here)

    What I have so far, HTML/JS:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <link rel="stylesheet" href="batty.css" type="text/css" />
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>Little Batty League</title>
            <script type="text/javascript" src="library.js"></script>
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript">
                var IE = window.ActiveXObject ? true : false;
                var MOZ = document.implementation.createDocument ? true : false;
                $(document).ready(function () {
                    $.ajax({
                        type: "GET",
                        url: "schedule.xml",
                        dataType: "xml",
                        success: function (xml) {
                            var select = $('#mySelect');
                            $(xml).find('Teams').each(function () {
                                var title = $(this).find('Team').text();
                                select.append("<option/><option class='ddheader'>" + title + "</option>");
                            });
                            select.children(":first").text("please make a selection").attr("selected", true);
                        }
                    });
                });
            </script>
        </head>
        <body onLoad="init()">
        <!-- container start -->
        <div id="container">
            <!-- banner start -->
            <div id="banner">
                <img src="images/mascot.jpg" width="324" height="112" alt="Mascot" />
                <!-- buttons start -->
                <table width="900" border="0" cellpadding="0" cellspacing="0">
                    <tr>
                        <td><div class="menuButton"><a href="index.html">Home</a></div></td>
                        <td><div class="menuButton"><a href="schedule.html">Schedule</a></div></td>
                        <td><div class="menuButton"><a href="contact.html">Contact</a></div></td>
                        <td><div class="menuButton"><a href="about.html">About</a></div></td>
                    </tr>
                </table>
                <!-- buttons end -->
            </div>
            <!-- banner end -->
            <!-- content start -->
            <div id="content">
                <br />
                <form>
                    <select id="mySelect">
                        <option>please make a selection</option>
                    </select>
                </form>
            </div>
            <!-- content end -->
            <!-- footer start -->
            <div id="footer">
                &copy; 2012 Batty League
            </div>
            <!-- footer end -->
        </div>
        <!-- container end -->
        </body>
        </html>

    And the XML is:

        <?xml version="1.0" encoding="utf-8"?>
        <Schedule season="1">
            <Teams><Team>Bluejays</Team></Teams>
            <Teams><Team>Chickens</Team></Teams>
            <Teams><Team>Lions</Team></Teams>
            <Teams><Team>Pixies</Team></Teams>
            <Teams><Team>Zombies</Team></Teams>
            <Teams><Team>Wombats</Team></Teams>
            <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-10T09:00:00</Date></Game>
            <Game status="Pending"><Home_Team>Bluejays</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-01-11T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Lions</Away_Team><Date>2012-01-18T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-19T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-01-21T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-23T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Bluejays</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-01-25T09:00:00</Date></Game>
            <Game status="Pending"><Home_Team>Zombies</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-27T09:00:00</Date></Game>
            <Game status="Pending"><Home_Team>Bluejays</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-01-28T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Bluejays</Away_Team><Date>2012-01-30T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Lions</Away_Team><Date>2012-01-31T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-04T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-05T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-07T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-08T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-10T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-12T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-14T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-15T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-16T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Lions</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-01-23T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Lions</Away_Team><Date>2012-02-24T09:00:00</Date></Game>
            <Game status="Pending"><Home_Team>Pixies</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-25T09:00:00</Date></Game>
            <Game status="Pending"><Home_Team>Zombies</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-26T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Pixies</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-27T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Pixies</Away_Team><Date>2012-02-28T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Zombies</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-04T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Zombies</Away_Team><Date>2012-02-05T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Wombats</Home_Team><Away_Team>Chickens</Away_Team><Date>2012-02-07T09:00:00</Date></Game>
            <Game status="Played"><Home_Team>Chickens</Home_Team><Away_Team>Wombats</Away_Team><Date>2012-02-08T09:00:00</Date></Game>
        </Schedule>

    If anybody can point me to jQuery code/modules that would help me with this, I'd appreciate it. Any help right now would be great; I'm just banging my head against a wall. I'm trying to avoid using XSLT transforms because I absolutely despise XML and I'm not good with it, so I'd like to just use JavaScript/PHP/etc., with only a sprinkling of the necessary XML where possible.

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

Assume we have a web page as below: the user can select a file, specify the block size in KB in an input box, and click a button to start the upload.

    <div>
        <input type="file" id="upload_files" name="files[]" /><br />
        Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
        <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
    </div>

Then we can have a JavaScript function that uploads the file in chunks when the user clicks the button.

    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
            });
        });
    </script>

First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData on the "window" object. If any of them is "undefined" the condition will be "false", which means the browser doesn't support these features and it's time to get it updated. FormData is another new feature we are going to use later. It lets us build a temporary form on the fly; we will use it to wrap each chunk and its associated metadata when invoking the service through ajax.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        if (window.File && window.Blob && window.FormData) {
            alert("Your browser is awesome, let's rock!");
        }
        else {
            alert("Oh man plz update to a modern browser before trying this cool stuff out.");
            return;
        }
    });

Each browser ships its own implementation of these interfaces; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
    }

Next, we calculate the start and end byte index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...

        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;

            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            var blockSizeInKB = $("#block_size").val();
            var blockSize = blockSizeInKB * 1024;
            var blocks = [];
            var offset = 0;
            var index = 0;
            var list = "";
            while (offset < fileSize) {
                var start = offset;
                var end = Math.min(offset + blockSize, fileSize);

                blocks.push({
                    name: fileName,
                    index: index,
                    start: start,
                    end: end
                });
                list += index + ",";

                offset = end;
                index++;
            }
        }
    });

Now we have all the chunk information ready. The next step is to upload the chunks one by one to the server side; when the server receives a chunk it uploads it as a block into Blob Storage, and finally commits them all with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which are asynchronous, so we need a JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find its documentation here. I will not explain this library in depth in this post. We put all the procedures we want to execute into a function array, and pass it into the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload operations into it.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...

        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...

            // define the function array and push all chunk upload operations into this array
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                });
            });
        }
    });

As you can see, I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                    // load the blob based on the start and end index of each chunk
                    var blob = file.slice(block.start, block.end);
                    // put the file name, index and blob into a temporary form
                    var fd = new FormData();
                    fd.append("name", block.name);
                    fd.append("index", block.index);
                    fd.append("file", blob);
                    // post the form to the backend service (asp.net mvc controller action)
                    $.ajax({
                        url: "/Home/UploadInFormData",
                        data: fd,
                        processData: false,
                        contentType: "multipart/form-data",
                        type: "POST",
                        success: function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            callback(null, block.index);
                        }
                    });
                });
            });
        }
    });

Then we invoke these functions one by one using async.js, and once all of them have executed successfully we invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions one by one
            // then invoke the commit ajax call to put blocks into blob in azure storage
            async.series(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read each chunk from the file and upload the content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks as a blob by invoking the backend service.

Save Chunks as Blocks into Blob Storage

Above we finished the client side JavaScript code, which uploads the file in chunks to the backend service that we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Then I retrieved the file name and index from the Request.Form object, and the chunk content from Request.Files, both passed from the client side.
Next I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that finally we can put all the blocks together as one blob by specifying their block ID list.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, then invoked the BlockBlob.PutBlockList method. After that our blob appears in the container and is ready to be downloaded.

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Now we have finished all the code we need. Below is the full client side JavaScript code.
    <script type="text/javascript" src="~/Scripts/async.js"></script>
    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
                // assert the browser supports html5
                if (window.File && window.Blob && window.FormData) {
                    alert("Your browser is awesome, let's rock!");
                }
                else {
                    alert("Oh man plz update to a modern browser before trying this cool stuff out.");
                    return;
                }

                // start to upload each file in chunks
                var files = $("#upload_files")[0].files;
                for (var i = 0; i < files.length; i++) {
                    var file = files[i];
                    var fileSize = file.size;
                    var fileName = file.name;

                    // calculate the start and end byte index for each block (chunk)
                    // with the index, file name and index list for future use
                    var blockSizeInKB = $("#block_size").val();
                    var blockSize = blockSizeInKB * 1024;
                    var blocks = [];
                    var offset = 0;
                    var index = 0;
                    var list = "";
                    while (offset < fileSize) {
                        var start = offset;
                        var end = Math.min(offset + blockSize, fileSize);

                        blocks.push({
                            name: fileName,
                            index: index,
                            start: start,
                            end: end
                        });
                        list += index + ",";

                        offset = end;
                        index++;
                    }

                    // define the function array and push all chunk upload operations into this array
                    var putBlocks = [];
                    blocks.forEach(function (block) {
                        putBlocks.push(function (callback) {
                            // load the blob based on the start and end index of each chunk
                            var blob = file.slice(block.start, block.end);
                            // put the file name, index and blob into a temporary form
                            var fd = new FormData();
                            fd.append("name", block.name);
                            fd.append("index", block.index);
                            fd.append("file", blob);
                            // post the form to the backend service (asp.net mvc controller action)
                            $.ajax({
                                url: "/Home/UploadInFormData",
                                data: fd,
                                processData: false,
                                contentType: "multipart/form-data",
                                type: "POST",
                                success: function (result) {
                                    if (!result.success) {
                                        alert(result.error);
                                    }
                                    callback(null, block.index);
                                }
                            });
                        });
                    });

                    // invoke the functions one by one
                    // then invoke the commit ajax call to put blocks into blob in azure storage
                    async.series(putBlocks, function (error, result) {
                        var data = {
                            name: fileName,
                            list: list
                        };
                        $.post("/Home/Commit", data, function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            else {
                                alert("done!");
                            }
                        });
                    });
                }
            });
        });
    </script>

And below is the full ASP.NET MVC controller code.
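One note before the full controller code: within a single blob, the Blob Storage service requires all block IDs to be the same length, which is why both the client index list and the server actions encode a fixed 4-byte integer rather than the raw index string. A minimal illustration (the variable names are mine, for demonstration only):

    // Block IDs within one blob must all share the same length, so we encode
    // a fixed-size 4-byte integer; every ID comes out as 8 base64 characters.
    var id0 = Convert.ToBase64String(BitConverter.GetBytes(0));    // "AAAAAA=="
    var id10 = Convert.ToBase64String(BitConverter.GetBytes(10));  // "CgAAAA=="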
    public class HomeController : Controller
    {
        private CloudStorageAccount _account;
        private CloudBlobClient _client;

        public HomeController()
            : base()
        {
            _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
            _client = _account.CreateCloudBlobClient();
        }

        public ActionResult Index()
        {
            ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

            return View();
        }

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }
    }

If we select a file in the browser, our application uploads chunks of the specified size to the server through background ajax calls, and then commits all the chunks into one blob, which we can then find in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem since we uploaded the chunks in sequence. To improve upload performance we can modify our client side code a bit so the upload operations are invoked in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we change this code to "async.parallel", which invokes all functions in parallel.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions in parallel
            // then invoke the commit ajax call to put blocks into blob in azure storage
            async.parallel(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into this array
            ... ...
            // invoke the functions in parallel with a concurrency limit of 4
            // then invoke the commit ajax call to put blocks into blob in azure storage
            async.parallelLimit(putBlocks, 4, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: reads part of the file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot maximize upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limit there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (single-core CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

From the test result chart (included in the original post), some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance improvement between 4 and 8 parallel threads.
- Transferring binary provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets the best performance.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Creating packages in code – Execute SQL Task

The Execute SQL Task is, for obvious reasons, very well used, so I thought if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of the more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block.

This first sample just shows adding the task and setting the basic properties for a connection and, of course, a SQL statement.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Set required properties
    taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
    taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too, and a sketch of what it might look like is shown at the end of this post.

Returning a single value with a Result Set

The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier, as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value: one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.
    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Add variable to hold result value
    package.Variables.Add("Variable", false, "User", 0);
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
    // Set single row result set
    task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
    // Add result set binding, map the id column to the variable
    task.ResultSetBindings.Add();
    IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
    resultBinding.ResultName = "id";
    resultBinding.DtsVariableName = "User::Variable";

For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just variations on this theme: set the property and map the result binding as required.

Parameter Mapping for SQL Statements

This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
    // Add variable to hold parameter value
    package.Variables.Add("Variable", false, "User", "sysrowsets");
    // Add input parameter binding
    task.ParameterBindings.Add();
    IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
    parameterBinding.DtsVariableName = "User::Variable";
    parameterBinding.ParameterDirection = ParameterDirections.Input;
    parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
    parameterBinding.ParameterName = "0";
    parameterBinding.ParameterSize = 255;

For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably produced by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connection types for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

    /// <summary>
    /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
    /// </summary>
    private enum OleDBDataTypes
    {
        BYTE = 0x11,
        CURRENCY = 6,
        DATE = 7,
        DB_VARNUMERIC = 0x8b,
        DBDATE = 0x85,
        DBTIME = 0x86,
        DBTIMESTAMP = 0x87,
        DECIMAL = 14,
        DOUBLE = 5,
        FILETIME = 0x40,
        FLOAT = 4,
        GUID = 0x48,
        LARGE_INTEGER = 20,
        LONG = 3,
        NULL = 1,
        NUMERIC = 0x83,
        NVARCHAR = 130,
        SHORT = 2,
        SIGNEDCHAR = 0x10,
        ULARGE_INTEGER = 0x15,
        ULONG = 0x13,
        USHORT = 0x12,
        VARCHAR = 0x81,
        VARIANT_BOOL = 11
    }

Download Sample code ExecSqlPackage.cs (10KB)
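As promised above, here is a sketch of what the AddSqlConnection helper might look like. This is an assumption based on the standard SSIS runtime API and the SQLOLEDB provider connection string format; the real implementation ships in the downloadable ExecSqlPackage.cs, so treat this as illustrative only.

    // Hypothetical sketch of the AddSqlConnection helper used in the samples above;
    // the real version is in the downloadable ExecSqlPackage.cs sample class.
    private static ConnectionManager AddSqlConnection(Package package, string server, string database)
    {
        // "OLEDB" is the creation name for an OLE-DB connection manager
        ConnectionManager connection = package.Connections.Add("OLEDB");
        connection.Name = "SqlConnection";
        connection.ConnectionString = string.Format(
            "Provider=SQLOLEDB.1;Data Source={0};Initial Catalog={1};Integrated Security=SSPI;",
            server, database);
        return connection;
    }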

