Search Results

Search found 7190 results on 288 pages for 'character codes'.


  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands and write all output (STDOUT and STDERR) into a single logfile:

        { command1 && command2 && command3 ; } > logfile.log 2>&1

    Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands goes to a logfile, in case I need it later; I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:

        { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected]

    The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log. Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag:

        { ./configure && make --keep-going && make install ; } > build.log 2>&1

    Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error:

        { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1

    I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
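
    One way to get this behavior (a hedged sketch, assuming bash, since it relies on process substitution; the tee invocation is illustrative and not part of the original question):

        # Everything goes to logfile.log; STDERR is additionally copied to the screen.
        # The group's exit status is preserved, so || error handling still works.
        { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2) \
            || mailx -s "There was an error" [email protected]

    One caveat: with two writers appending to the same file, the relative ordering of STDOUT and STDERR lines in the logfile is not guaranteed.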


  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a server (2x Hexa-Core Xeon E5649 2.53GHz w/HT, 32GB RAM and 20000 GB bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only; the estimated number of users is ~15,000 at the same time. At the moment I have around 2000 users online, each of whom runs 50 MySQL queries (small values, mostly SELECT and INSERT) from the beginning until the end of the session. Server CPU load is high at this number of connections, while the RAM usage is almost 1GB out of 32GB. It's worth mentioning that the server was running very fast with no problems at all, but I am concerned about the load average. http://s12.postimage.org/z7hi6mz3h/photo.png

        top - 03:02:43 up 9 min, 2 users, load average: 50.83, 30.14, 12.83
        Tasks: 432 total, 1 running, 430 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 66.5%id, 33.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 32939992k total, 3111604k used, 29828388k free, 84108k buffers
        Swap: 2048280k total, 0k used, 2048280k free, 1621640k cached

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
        2860  root     20 0  25820 2288 1420 S 3    0.0  0:11.18   htop
        1182  root     20 0  0     0    0    D 2    0.0  0:01.46   kjournald
        1935  mysql    20 0  12.3g 161m 7924 S 1    0.5  102:31.45 mysqld
        11    root     20 0  0     0    0    S 0    0.0  0:00.38   kworker/0:1
        1822  www-data 20 0  247m  25m  4188 D 0    0.1  0:01.81   apache2
        2920  www-data 20 0  0     0    0    Z 0    0.0  0:01.20   apache2 <defunct>
        2942  www-data 20 0  247m  23m  3056 D 0    0.1  0:00.20   apache2
        3516  www-data 20 0  247m  23m  3028 D 0    0.1  0:00.06   apache2
        3521  www-data 20 0  247m  23m  3020 D 0    0.1  0:00.09   apache2
        3664  www-data 20 0  247m  23m  3132 D 0    0.1  0:00.09   apache2
        3674  www-data 20 0  247m  23m  3252 D 0    0.1  0:00.06   apache2
        3713  www-data 20 0  247m  23m  3040 D 0    0.1  0:00.09   apache2
        1     root     20 0  24328 2284 1344 S 0    0.0  0:03.09   init
        2     root     20 0  0     0    0    S 0    0.0  0:00.00   kthreadd
        3     root     20 0  0     0    0    S 0    0.0  0:00.01   ksoftirqd/0
        6     root     RT 0  0     0    0    S 0    0.0  0:00.00   migration/0
        7     root     RT 0  0     0    0    S 0    0.0  0:00.00   watchdog/0
        8     root     RT 0  0     0    0    S 0    0.0  0:00.00   migration/1
        9     root     20 0  0     0    0    S 0    0.0  0:00.00   kworker/1:0

        root@server:~/codes# vmstat 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r  b swpd free     buff  cache   si so bi bo    in    cs   us sy id wa
        19 0 0    29684012 86112 1689844 0  0  19 590   254   231  48 0  47 5
        23 0 0    29704812 86128 1697672 0  0  4  320   11100 8121 77 1  22 0
        33 0 0    29671044 86156 1705308 0  0  0  5440  13190 9140 95 1  4  0
        33 3 0    29670088 86160 1706288 0  0  0  32932 12275 7297 99 0  1  0
        35 0 0    29693456 86188 1710724 0  0  4  676   12701 7867 98 1  1  0
        ^C

    I have not changed any of the default configurations that come with Ubuntu. Is this load normal for such a powerful server? Is there any optimization I can make to Apache/MySQL to minimize the load? What do you recommend?
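
    With 66.5% idle CPU but 33.1% wa and several processes stuck in uninterruptible (D) state, the numbers above point at disk I/O rather than raw CPU. A hedged sketch of the kind of defaults people usually revisit first on a 32GB LAMP box (the values are illustrative assumptions, not measured recommendations):

        # /etc/mysql/my.cnf -- MySQL 5.5-era settings
        [mysqld]
        innodb_buffer_pool_size = 16G        # keep hot data in the otherwise idle RAM
        innodb_flush_log_at_trx_commit = 2   # far fewer fsyncs, at a small durability cost

        # /etc/apache2/apache2.conf -- Apache 2.2 prefork: cap concurrency so requests queue
        <IfModule mpm_prefork_module>
            MaxClients 256
        </IfModule>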


  • How to resolve `bootpd` crashing constantly on Mac OS X 10.6.4 Snow Leopard Server?

    - by morgant
    I've got a Mac Pro running Mac OS X 10.6.4 Snow Leopard Server and it's recently started getting numerous 'kNetworkError's in Server Admin.app when viewing services. It's acting as a gateway w/NAT and has been so for quite some time. There is one glaring issue: bootpd crashes all the time with the following errors in /var/log/system.log:

        Aug 12 16:54:59 servername bootpd[3572]: server starting
        Aug 12 16:54:59 servername bootpd[3572]: server name servername.domain.tld
        Aug 12 16:54:59 servername bootpd[3572]: interface en0: ip 10.0.1.9 mask 255.255.255.0
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: re-reading configuration
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: shadow file size will be set to 48 megabytes
        Aug 12 16:54:59 servername bootpd[3572]: bsdpd: age time 00:15:00
        Aug 12 16:54:59 servername bootpd[3572]: [3572] detected buffer overflow
        Aug 12 16:54:59 servername com.apple.launchd[1] (com.apple.bootpd[3572]): Job appears to have crashed: Abort trap
        Aug 12 16:54:59 servername com.apple.ReportCrash.Root[3571]: 2010-08-12 16:54:59.828 ReportCrash[3571:2807] Saved crash report for bootpd[3572] version ??? (???) to /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash

    It is correctly configured to serve DHCP through en1 (not en0), the "LAN" port. This happens even with no hardware (even switches) connected to the "LAN" port. There are no DHCP clients listed. Oddly, the "Overview" shows 1 static map, but nothing is listed under "Static Maps" and there are no "Computers" in Open Directory. /var/db/dhcp_leases is empty. /Library/Logs/DiagnosticReports/bootpd_2010-08-12-165459_localhost.crash is as follows:

        Process:         bootpd [3572]
        Path:            /usr/libexec/bootpd
        Identifier:      bootpd
        Version:         ??? (???)
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [1]
        Date/Time:       2010-08-12 16:54:59.713 -0400
        OS Version:      Mac OS X Server 10.6.4 (10F569)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        __abort() called

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib  0x00007fff803c13d6 __kill + 10
        1   libSystem.B.dylib  0x00007fff80461913 __abort + 103
        2   libSystem.B.dylib  0x00007fff80456157 mach_msg_receive + 0
        3   libSystem.B.dylib  0x00007fff803b92cf __strncpy_chk + 14
        4   bootpd             0x0000000100014e5d PLCache_read + 782
        5   bootpd             0x0000000100004a3d BSDPClients_init + 68
        6   bootpd             0x00000001000053b5 bsdp_init + 2396
        7   bootpd             0x000000010000200b S_update_services + 1228
        8   bootpd             0x0000000100002344 S_server_loop + 571
        9   bootpd             0x0000000100003963 main + 1766
        10  bootpd             0x0000000100000984 start + 52

        Thread 0 crashed with X86 Thread State (64-bit):
        rax: 0x0000000000000000  rbx: 0x00007fff5fbfe220  rcx: 0x00007fff5fbfe218  rdx: 0x0000000000000000
        rdi: 0x0000000000000df4  rsi: 0x0000000000000006  rbp: 0x00007fff5fbfe240  rsp: 0x00007fff5fbfe218
        r8:  0x0000000000000001  r9:  0x0000000100114280  r10: 0x00007fff803bd412  r11: 0xffffff80002e1680
        r12: 0xffffffffffffffff  r13: 0x00007fff5fbfe330  r14: 0x00007fff5fbfe33b  r15: 0x00007fff7009bec0
        rip: 0x00007fff803c13d6  rfl: 0x0000000000000202  cr2: 0x000000010004c000

    Any thoughts or suggestions as to resolving this?
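
    The backtrace suggests where to look: the overflow happens in PLCache_read, called from BSDPClients_init, which reads bootpd's NetBoot/BSDP client cache. A hedged thing to try (assuming that cache file is simply corrupt; move it aside rather than deleting it):

        # Move the BSDP client cache aside so bootpd rebuilds it, then restart DHCP.
        sudo mv /var/db/bsdpd_clients /var/db/bsdpd_clients.bak
        sudo serveradmin stop dhcp
        sudo serveradmin start dhcp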


  • (PHP) Validation, Security and Speed - Does my app have these?

    - by Devner
    Hi all, I am currently working on building a community website in PHP. This contains forms that a user can fill in, from registration through to a lot of other functionality. I am not an object-oriented guy, so I am using functions most of the time to handle my application. I know I have to learn OOP, but I currently need to develop this website and get it running soon. Anyway, here's a sample of what I let my app do. Consider a page (register.php) that has a form where a user has 3 fields to fill up, say: First Name, Last Name and Email. Upon submission of this form, I want to validate the form and show the corresponding errors to the users:

        <form id="form1" name="form1" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>">
          <label for="name">Name:</label>
          <input type="text" name="name" id="name" /><br />
          <label for="lname">Last Name:</label>
          <input type="text" name="lname" id="lname" /><br />
          <label for="email">Email:</label>
          <input type="text" name="email" id="email" /><br />
          <input type="submit" name="submit" id="submit" value="Submit" />
        </form>

    This form will POST the info to the same page. So here's the code that will process the POSTed info:

        <?php
        require("functions.php");
        if( isset($_POST['submit']) ) {
            $errors = fn_register();
            if( count($errors) ) {
                //Show error messages
            } else {
                //Send welcome mail to the user or do database stuff...
            }
        }
        ?>

        <?php
        //functions.php page:
        function sql_quote( $value ) {
            if( get_magic_quotes_gpc() ) {
                $value = stripslashes( $value );
            } else {
                $value = addslashes( $value );
            }
            if( function_exists( "mysql_real_escape_string" ) ) {
                $value = mysql_real_escape_string( $value );
            }
            return $value;
        }

        function clean($str) {
            $str = strip_tags($str, '<br>,<br />');
            $str = trim($str);
            $str = sql_quote($str);
            return $str;
        }

        foreach ($_POST as &$value) {
            if (!is_array($value)) {
                $value = clean($value);
            } else {
                clean($value);
            }
        }

        foreach ($_GET as &$value) {
            if (!is_array($value)) {
                $value = clean($value);
            } else {
                clean($value);
            }
        }

        function validate_name( $fld, $min, $max, $rule, $label ) {
            if( $rule == 'required' ) {
                if ( trim($fld) == '' ) {
                    $str = "$label: Cannot be left blank.";
                    return $str;
                }
            }
            if ( isset($fld) && trim($fld) != '' ) {
                if ( isset($fld) && $fld != '' && !preg_match("/^[a-zA-Z\ ]+$/", $fld)) {
                    $str = "$label: Invalid characters used! Only Lowercase, Uppercase alphabets and Spaces are allowed";
                } else if ( strlen($fld) < $min or strlen($fld) > $max ) {
                    $curr_char = strlen($fld);
                    $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char";
                } else {
                    $str = 0;
                }
            } else {
                $str = 0;
            }
            return $str;
        }

        function validate_email( $fld, $min, $max, $rule, $label ) {
            if( $rule == 'required' ) {
                if ( trim($fld) == '' ) {
                    $str = "$label: Cannot be left blank.";
                    return $str;
                }
            }
            if ( isset($fld) && trim($fld) != '' ) {
                if ( !eregi('^[a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.([a-zA-Z]{2,4})$', $fld) ) {
                    $str = "$label: Invalid format. Please check.";
                } else if ( strlen($fld) < $min or strlen($fld) > $max ) {
                    $curr_char = strlen($fld);
                    $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char";
                } else {
                    $str = 0;
                }
            } else {
                $str = 0;
            }
            return $str;
        }

        function val_rules( $str, $val_type, $rule='required' ){
            switch ($val_type) {
                case 'name':  $val = validate_name( $str, 3, 20, $rule, 'First Name'); break;
                case 'lname': $val = validate_name( $str, 10, 20, $rule, 'Last Name'); break;
                case 'email': $val = validate_email( $str, 10, 60, $rule, 'Email'); break;
            }
            return $val;
        }

        function fn_register() {
            $errors = array();
            $val_name  = val_rules( $_POST['name'], 'name' );
            $val_lname = val_rules( $_POST['lname'], 'lname', 'optional' );
            $val_email = val_rules( $_POST['email'], 'email' );
            if ( $val_name != '0' )  { $errors['name']  = $val_name; }
            if ( $val_lname != '0' ) { $errors['lname'] = $val_lname; }
            if ( $val_email != '0' ) { $errors['email'] = $val_email; }
            return $errors;
        }
        //END of functions.php page
        ?>

    OK, now it might look like there's a lot, but let me break it down target-wise: 1. I wanted the foreach ($_POST as &$value) and foreach ($_GET as &$value) loops to loop through the received info from the user submission and strip/remove all malicious input. I am calling a function called clean on the input first to achieve the objective stated above. This function will process each of the inputs, whether individual field values or even arrays, and allow only <br> tags and remove everything else. The rest of it is obvious. Once this happens, the new/cleaned values will be processed by the fn_register() function and, based on the values returned after the validation, we get the corresponding errors or NULL values (as applicable). So here are my questions: 1. This pretty much makes me feel secure, as I am forcing the user to correct malicious data and won't process the final data unless the errors are corrected. Am I correct? 2. Does the method that I follow guarantee speed (as I am using lots of functions and their corresponding calls)? The fields of a form differ, and the minimum number of fields I may have at any given point of time in any form may be 3 and can go up to as high as 100 (or even more; I am not sure, as the website is still being developed). Will having hundreds of fields and their validation in the above way reduce the speed of the application (say, if up to half a million users are accessing the website at the same time)? What can I do to improve the speed and reduce function calls (if possible)? 3. Can I do something to improve the current ways of validation? I am holding off the object-oriented approach and using FILTERS in PHP for later. So please, I request you all to suggest ways to improve/tweak the current approach, and tell me whether the script is vulnerable or safe enough to be used in a live production environment. If not, what can I do to be able to use it live? Thank you all in advance.
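
    For question 3, a hedged sketch of what the FILTERS approach mentioned above could look like (field names reused from the form; the exact rules are illustrative assumptions):

        <?php
        // Validate all fields in one pass; invalid values come back as false,
        // missing fields as null.
        $rules = array(
            'name'  => array(
                'filter'  => FILTER_VALIDATE_REGEXP,
                'options' => array('regexp' => '/^[a-zA-Z ]{3,20}$/'),
            ),
            'email' => FILTER_VALIDATE_EMAIL,
        );
        $input = filter_input_array(INPUT_POST, $rules);

        if ($input === null || in_array(false, $input, true)) {
            // build per-field error messages here
        }
        // Note: escaping belongs at output time (htmlspecialchars) and at query
        // time (prepared statements), rather than in a global clean() pass.
        ?>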


  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on a local server, show any differences from the files with the same absolute path on a remote server; e.g. compare local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times. I also do not want to sync the files (yet); only show the diffs. I have rsync 3.0.6 on both local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where: "me" = my username; "remote" = remote server hostname; the current working directory is '/'; test.txt contains one line reading "/etc/init.d/apache"; OS: Linux 2.6.9. Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, rsync's output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks /etc is identical except for mod time, /etc/init.d needs to be changed, and /etc/init.d/apache will be sent to the remote server. I don't understand how, with the --checksum option and the files having identical checksums, rsync could think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it's run remotely in /, so even relative pathnames will still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable. I'm wondering: is rsync still looking at file mod times, even with --checksum? Am I somehow not setting up the path(s) right? What am I doing wrong?
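
    As a hedged cross-check for the stated goal (checksum-only comparison), bypassing rsync entirely; this assumes ssh access and md5sum on both ends:

        # Print each listed file whose checksum differs between local and remote.
        while read -r f; do
            local_sum=$(md5sum < "$f")
            remote_sum=$(ssh me@remote "md5sum < '$f'")
            [ "$local_sum" != "$remote_sum" ] && echo "differs: $f"
        done < /home/me/test.txt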


  • segmentation fault using Base64 encoding

    - by Natasha Thapa
    I took the code from the links below to encode and decode text, but I get a segmentation fault when trying to run this. Any ideas?
    http://etutorials.org/Programming/secure+programming/Chapter+4.+Symmetric+Cryptography+Fundamentals/4.5+Performing+Base64+Encoding/
    http://etutorials.org/Programming/secure+programming/Chapter+4.+Symmetric+Cryptography+Fundamentals/4.6+Performing+Base64+Decoding/

        #include <stdlib.h>
        #include <string.h>
        #include <stdio.h>

        static char b64revtb[256] = {
          -3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*0-15*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*16-31*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 62, -1, -1, -1, 63, /*32-47*/
          52, 53, 54, 55, 56, 57, 58, 59, 60, 61, -1, -1, -1, -2, -1, -1, /*48-63*/
          -1,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, /*64-79*/
          15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, -1, -1, -1, -1, -1, /*80-95*/
          -1, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, /*96-111*/
          41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, -1, -1, -1, -1, -1, /*112-127*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*128-143*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*144-159*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*160-175*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*176-191*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*192-207*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*208-223*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*224-239*/
          -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1  /*240-255*/
        };

        unsigned char *spc_base64_encode( unsigned char *input , size_t len , int wrap );
        unsigned char *spc_base64_decode(unsigned char *buf, size_t *len, int strict, int *err);
        static unsigned int raw_base64_decode(unsigned char *in, unsigned char *out, int strict, int *err);

        unsigned char *tmbuf = NULL;
        static char tmpbuffer[] = {0};

        int main(void)
        {
            memset( tmpbuffer, NULL, sizeof( tmpbuffer ) );
            sprintf( tmpbuffer, "%s:%s" , "username", "password" );
            tmbuf = spc_base64_encode( (unsigned char *)tmpbuffer , strlen( tmpbuffer ), 0 );
            printf(" The text %s has been encrytped to %s \n", tmpbuffer, tmbuf );

            unsigned char *decrypt = NULL;
            int strict;
            int *err;
            decrypt = spc_base64_decode( tmbuf , strlen( tmbuf ), 0, err );
            printf(" The text %s has been decrytped to %s \n", tmbuf , decrypt);
        }

        static char b64table[64] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                                   "abcdefghijklmnopqrstuvwxyz"
                                   "0123456789+/";

        /* Accepts a binary buffer with an associated size.
         * Returns a base64 encoded, NULL-terminated string.
         */
        unsigned char *spc_base64_encode(unsigned char *input, size_t len, int wrap)
        {
            unsigned char *output, *p;
            size_t i = 0, mod = len % 3, toalloc;

            toalloc = (len / 3) * 4 + (3 - mod) % 3 + 1;
            if (wrap) {
                toalloc += len / 57;
                if (len % 57) toalloc++;
            }
            p = output = (unsigned char *)malloc(((len / 3) + (mod ? 1 : 0)) * 4 + 1);
            if (!p) return 0;
            while (i < len - mod) {
                *p++ = b64table[input[i++] >> 2];
                *p++ = b64table[((input[i - 1] << 4) | (input[i] >> 4)) & 0x3f];
                *p++ = b64table[((input[i] << 2) | (input[i + 1] >> 6)) & 0x3f];
                *p++ = b64table[input[i + 1] & 0x3f];
                i += 2;
                if (wrap && !(i % 57)) *p++ = '\n';
            }
            if (!mod) {
                if (wrap && i % 57) *p++ = '\n';
                *p = 0;
                return output;
            } else {
                *p++ = b64table[input[i++] >> 2];
                *p++ = b64table[((input[i - 1] << 4) | (input[i] >> 4)) & 0x3f];
                if (mod == 1) {
                    *p++ = '=';
                    *p++ = '=';
                    if (wrap) *p++ = '\n';
                    *p = 0;
                    return output;
                } else {
                    *p++ = b64table[(input[i] << 2) & 0x3f];
                    *p++ = '=';
                    if (wrap) *p++ = '\n';
                    *p = 0;
                    return output;
                }
            }
        }

        static unsigned int raw_base64_decode(unsigned char *in, unsigned char *out, int strict, int *err)
        {
            unsigned int result = 0, x;
            unsigned char buf[3], *p = in, pad = 0;

            *err = 0;
            while (!pad) {
                switch ((x = b64revtb[*p++])) {
                    case -3: /* NULL TERMINATOR */
                        if (((p - 1) - in) % 4) *err = 1;
                        return result;
                    case -2: /* PADDING CHARACTER. INVALID HERE */
                        if (((p - 1) - in) % 4 < 2) {
                            *err = 1;
                            return result;
                        } else if (((p - 1) - in) % 4 == 2) {
                            /* Make sure there's appropriate padding */
                            if (*p != '=') {
                                *err = 1;
                                return result;
                            }
                            buf[2] = 0;
                            pad = 2;
                            result++;
                            break;
                        } else {
                            pad = 1;
                            result += 2;
                            break;
                        }
                    case -1:
                        if (strict) {
                            *err = 2;
                            return result;
                        }
                        break;
                    default:
                        switch (((p - 1) - in) % 4) {
                            case 0: buf[0] = x << 2; break;
                            case 1: buf[0] |= (x >> 4); buf[1] = x << 4; break;
                            case 2: buf[1] |= (x >> 2); buf[2] = x << 6; break;
                            case 3:
                                buf[2] |= x;
                                result += 3;
                                for (x = 0; x < 3 - pad; x++) *out++ = buf[x];
                                break;
                        }
                        break;
                }
            }
            for (x = 0; x < 3 - pad; x++) *out++ = buf[x];
            return result;
        }

        /* If err is non-zero on exit, then there was an incorrect padding error. We
         * allocate enough space for all circumstances, but when there is padding, or
         * there are characters outside the character set in the string (which we are
         * supposed to ignore), then we end up allocating too much space. You can
         * realloc() to the correct length if you wish.
         */
        unsigned char *spc_base64_decode(unsigned char *buf, size_t *len, int strict, int *err)
        {
            unsigned char *outbuf;

            outbuf = (unsigned char *)malloc(3 * (strlen(buf) / 4 + 1));
            if (!outbuf) {
                *err = -3;
                *len = 0;
                return 0;
            }
            *len = raw_base64_decode(buf, outbuf, strict, err);
            if (*err) {
                free(outbuf);
                *len = 0;
                outbuf = 0;
            }
            return outbuf;
        }
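
    For what it's worth, the driver above contains enough on its own to crash: tmpbuffer is declared as a one-byte array, so the sprintf overflows it; strlen(tmbuf) is passed where spc_base64_decode expects a size_t pointer; and err is an uninitialized pointer. A hedged corrected main() under those assumptions (keeping the spc_* functions as posted):

        #include <stdio.h>
        #include <string.h>

        /* Assumes the spc_base64_encode()/spc_base64_decode() definitions above. */
        int main(void)
        {
            char tmpbuffer[64];              /* large enough; the original array held 1 byte */
            unsigned char *enc, *dec;
            size_t declen = 0;
            int err = 0;

            snprintf(tmpbuffer, sizeof tmpbuffer, "%s:%s", "username", "password");

            enc = spc_base64_encode((unsigned char *)tmpbuffer, strlen(tmpbuffer), 0);
            if (!enc) return 1;
            printf("The text %s has been encoded to %s\n", tmpbuffer, enc);

            /* Pass the address of a size_t and of a real int, not bare values/pointers. */
            dec = spc_base64_decode(enc, &declen, 0, &err);
            if (!dec || err) return 1;

            /* The decoded buffer is not NUL-terminated, so bound the print by declen. */
            printf("Decoded back to %.*s\n", (int)declen, dec);
            return 0;
        }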


  • I am trying to insert a value from a combo box into a database, but this is not working

    - by user3675488
    First I have taken some strings in which I am trying to save the input values of the textboxes, and there is a combo box. I am trying to save the input value of the combo box to the database, but the following code is unable to do that. What is the problem? Please help me.

        private void OK_Click(object sender, EventArgs e)
        {
            string sTxtcmpnyName = "";
            string sTxtcmpnyAdd = "";
            string sPhoneno = "";
            string sTxtFax = "";
            string sPanNo = "";
            string sTxtTin = "";
            string sTax = "";
            string sAccYr = "";
            string sComboSt = "";

            // Adding validation to the textboxes.
            // Here the control enters GroupBox Grpcmp, where c denotes the name of the control in the groupbox.
            foreach (Control c in this.Controls)
            {
                // c1 is another control which denotes the textboxes under the GroupBox Grpcmp.
                foreach (Control c1 in c.Controls)
                {
                    // The following code checks that the name of the company is not blank.
                    if (c1 is TextBox == true) // simpler than what you've done there
                    {
                        TextBox temp = (TextBox)c1;
                        // The control is entering into Txtcompany.
                        if (temp.Name == "Txtcompanyname")
                        {
                            // If the TextBox is empty or Null, the following message will be shown.
                            if ((temp.Text == "") || (temp.Text == "NULL"))
                            {
                                MessageBox.Show("Company Name should not be Blank");
                            }
                            sTxtcmpnyName = temp.Text;
                        }
                        else if (c1.Name == "TxtcompanyAddress") { sTxtcmpnyAdd = c1.Text; }
                        else if (c1.Name == "Txtphoneno") { sPhoneno = c1.Text; }
                        else if (c1.Name == "TxtFax ") { sTxtFax = c1.Text; }
                        else if (c1.Name == "Txtpanno") { sPanNo = c1.Text; }
                        else if (c1.Name == "TxtTin") { sTxtTin = c1.Text; }
                        else if (c1.Name == "Txtservicetax") { sTax = c1.Text; }

                        // Now I am converting TxtAcYr into date format. Two conditions are checked:
                        // first, if TxtAcYr is null or empty, show a message to enter the account year;
                        // second, if its length is not 10, show that the date format should be DD-MM-YYYY.
                        // Then the user input is picked apart using a for loop.
                        if (c1.Name == "TxtAcYr")
                        {
                            sAccYr = c1.Text;
                            // If TxtAcYr is null or empty, show the message to enter the account year.
                            if ((c1.Text == "") || (c1.Text == "NULL"))
                            {
                                MessageBox.Show("Account Year should be entered!!");
                            }
                            // Check whether the length equals 10, because the date format has
                            // 10 characters in total, including the special characters.
                            // MessageBox.Show(yearlength.Length.ToString());
                            else if (sAccYr.Length != 10)
                            {
                                MessageBox.Show("The Data Format DD-MM-YYYY");
                            }
                            // Now the value of the user input will be picked.
                            else
                            {
                                // A string named JK is taken for further use.
                                String JK = "";
                                // This loop picks the user input while i is less than the string length.
                                for (int i = 0; i < sAccYr.Length; i++)
                                {
                                    // Skip the special characters (here '-'), which sit at positions 2 and 5.
                                    if ((i != 2) && (i != 5))
                                    {
                                        // The next character of the year string is appended to JK.
                                        JK = JK + sAccYr[i];
                                    }
                                    // If i equals 1, the day part is complete.
                                    if (i == 1)
                                    {
                                        // If Convert.ToInt32(JK) >= the maximum number of days in a month,
                                        // the following alert message will be shown.
                                        if (Convert.ToInt32(JK) >= 32)
                                        {
                                            MessageBox.Show("The Data Format DD-MM-YYYY");
                                        }
                                        JK = "";
                                    }
                                    // If i equals 4, the month part is complete.
                                    else if (i == 4)
                                    {
                                        // If Convert.ToInt32(JK) >= the maximum number of months,
                                        // the following alert message will be shown.
                                        if (Convert.ToInt32(JK) >= 13)
                                        {
                                            MessageBox.Show("The Data Format DD-MM-YYYY");
                                        }
                                        JK = "";
                                    }
                                }
                            }
                        }
                    }
                    else if (c1.Name == "state_cmb")
                    {
                        //sTxttate = c1.Text.ToString();
                        sComboSt = c1.Text;
                        MessageBox.Show(c1.Text);
                    }
                }
            }

            ////// DATABASE CONNECTION //////
            try
            {
                SqlConnection conn = new SqlConnection();
                SqlCommand cmd = new SqlCommand();
                conn.ConnectionString = ("Data Source =192.168.0.2 ;database= Mee_Company; Persist Security Info =true; User ID =sa;Password = soso654321@");
                conn.Open();
                cmd.CommandText = ("INSERT INTO CompanyMaster(CompanyName,Address,State,Phone,Fax,PAN,TIN,STAX,AccountsYear)values('" + sTxtcmpnyName + "','" + sTxtcmpnyAdd + "','" + sComboSt + "','" + sPhoneno + "','" + sTxtFax + "','" + sPanNo + "','" + sTxtTin + "','" + sTax + "','" + sAccYr + "')");
                //('" + sTxtcmpnyName + "', '" + TxtcompanyAddress.Text + "', '" + Txtphoneno.Text + "', '" + TxtFax.Text + "', '" + Txtservicetax.Text + "','" + TxtAcYr.Text + "')");
                cmd.Connection = conn;
                cmd.ExecuteNonQuery();
                conn.Close();
                //cmd.Parameters.AddWithValue
            }
            catch (Exception ee)
            {
                MessageBox.Show(ee.ToString());
            }
        }

        // An event so that when the user clicks the Cancel button, the form is closed.
        private void BtmCancle_Click(object sender, EventArgs e)
        {
            // this means the form.
            this.Close();
        }

        // Another event, TxtAcYr_KeyPress, so that TxtAcYr only allows numeric input
        // along with the special character '-'.
        private void TxtAcYr_KeyPress(object sender, KeyPressEventArgs e)
        {
            // Check whether the input is a number or '-'; backspace and delete are also enabled.
            if (char.IsNumber(e.KeyChar) || e.KeyChar == (char)Keys.Back || e.KeyChar == (char)Keys.Delete || e.KeyChar == '-')
            {
                e.Handled = false; // ok
            }
            else
            {
                e.Handled = true; // not ok
            }
        }
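
    Two things stand out in the code above: the name comparison "TxtFax " contains a trailing space, so that branch can never match a control actually named TxtFax, and the INSERT concatenates raw strings (which invites SQL injection, and any stray quote in the input breaks the statement). A hedged sketch of the parameterized version, reusing the variables above (connectionString stands in for the connection string used earlier):

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO CompanyMaster (CompanyName, Address, State, Phone, Fax, PAN, TIN, STAX, AccountsYear) " +
            "VALUES (@name, @addr, @state, @phone, @fax, @pan, @tin, @stax, @accyr)", conn))
        {
            cmd.Parameters.AddWithValue("@name", sTxtcmpnyName);
            cmd.Parameters.AddWithValue("@addr", sTxtcmpnyAdd);
            cmd.Parameters.AddWithValue("@state", sComboSt);
            cmd.Parameters.AddWithValue("@phone", sPhoneno);
            cmd.Parameters.AddWithValue("@fax", sTxtFax);
            cmd.Parameters.AddWithValue("@pan", sPanNo);
            cmd.Parameters.AddWithValue("@tin", sTxtTin);
            cmd.Parameters.AddWithValue("@stax", sTax);
            cmd.Parameters.AddWithValue("@accyr", sAccYr);
            conn.Open();
            cmd.ExecuteNonQuery();
        }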


  • How can I reverse mouse movement (X & Y axis) system-wide? (Win 7 x64)

    - by Scivitri
    Short Version

    I'm looking for a way to reverse the X and Y mouse axis movements. The computer is running Windows 7, x64 and Logitech SetPoint 6.32. I would like a system-level, permanent fix, such as a mouse driver modification or a registry tweak. Does anyone know of a solid way of implementing this, or how to find the registry values to change this? I'll settle quite happily for how to enable the orientation feature in SetPoint 6.32 for mice as well as trackballs.

    Long Version

    People seem never to understand why I would want this, and I commonly hear "just use the mouse right-side up!" advice. Dyslexia is not something which can be cured by "just reading things right." While I appreciate the attempts to help, I'm hoping some background may help people understand. I have a user with an unusual form of dyslexia, for whom mouse movements are backward. If she wants to move her cursor left, she will move the mouse right. If she wants the cursor to move up, she'll move the mouse down. She used to hold her mouse upside-down, which makes sophisticated clicking difficult, is terrible for ergonomics, and makes multi-button mice completely useless. In olden times, mouse drivers included an orientation feature (typically a hot-air balloon you dragged upward to set the mouse movement orientation) which could be used to set the relationship between mouse movement and cursor movement. Several years ago, mouse drivers were "improved" and this feature has since been limited to trackballs. After losing the orientation feature she went back to upside-down mousing for a bit, until finding UberOptions, a tweak for Logitech SetPoint which would enable all features for all pointing devices. This included the orientation feature. And there was much rejoicing. Now her mouse has died, and current Logitech mice require a newer version of SetPoint for which UberOptions has not been updated. We've also seen MAF-Mouse (the developer indicated the version for 64-bit Windows does not support USB mice yet) and Sakasa (while it works, commentary on the web indicates it tends to break randomly and often; it's also just a running program, so not system-wide). I have seen some very sophisticated registry hacks. For example, I used to use a hack which would change the codes created by the F1-F12 keys when the F-Lock key was invented and defaulted to screwing my keyboard up. I'm hoping there's a way to flip X and Y in the registry, or some other, similar, system-level tweak out there. Another solution could be re-enabling the orientation feature for mice as well as trackballs. It's very frustrating that input device drivers include the functionality we desperately need for an accessibility concern, but it's been disabled in the name of making the drivers more idiot-proof.


  • New Exchange 2010 CAS cannot find domain controllers

    - by NorbyTheGeek
    I am experiencing problems migrating from Exchange 2003 to Exchange 2010. I am on the first step: installing a new 2010 Client Access Server role. The Active Directory domain functional level is 2003. All domain controllers are 2003 R2. The only existing Exchange 2003 server happens to be housed on one of the domain controllers; it is running Exchange 2003 Standard w/ SP2. IPv6 is enabled and working on all domain controllers, servers, and routers, including this new Exchange server. After installing the CAS role on a new 2008 R2 server (Hyper-V VM) I am receiving 2114 events:

        Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Topology discovery failed,
        error 0x80040a02 (DSC_E_NO_SUITABLE_CDC). Look up the Lightweight Directory Access
        Protocol (LDAP) error code specified in the event description. To do this, use
        Microsoft Knowledge Base article 218185, "Microsoft LDAP Error Codes." Use the
        information in that article to learn more about the cause and resolution to this
        error. Use the Ping or PathPing command-line tools to test network connectivity to
        local domain controllers.

    Prior to each, I receive the following 2080 event:

        Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Exchange Active Directory
        Provider has discovered the following servers with the following characteristics:
        (Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
        In-site:
        b.company.intranet CDG 1 0 0 1 0 0 0 0 0
        s.company.intranet CDG 1 0 0 1 0 0 0 0 0
        Out-of-site:
        a.company.intranet CD- 1 0 0 0 0 0 0 0 0
        o.company.intranet CD- 1 0 0 0 0 0 0 0 0
        g.company.intranet CD- 1 0 0 0 0 0 0 0 0

    Connectivity between the new Exchange server and all domain controllers via IPv4 and IPv6 is working. I have verified that the new Exchange server is a member of the following groups: Exchange Servers, Exchange Domain Servers, Exchange Install Domain Servers, Exchange Trusted Subsystem. Heck, I even put the new Exchange server into Domain Admins just to see if it would help. It didn't. I can't find any evidence of Active Directory replication problems, and all pre-setup tasks (/PrepareLegacyExchangePermissions, /PrepareSchema, /PrepareAD, /PrepareDomain) completed successfully. The only problem so far that I haven't been able to resolve with my Active Directory is that I am unable to get my IPv6 subnets into Sites and Services. Where should I proceed from here?


  • IPv6 routing to another interface

    - by Robert
    I'm trying to get an IPv6-enabled router to forward data from one interface to the other, and I'm having issues. When following this example (http://www.cisco.com/en/US/tech/tk872/technologies_configuration_example09186a0080ba6106.shtml) I am able to get full connectivity between all 3 routers in my simulator. However, when I try to use only 1 router, I can't get connectivity to the other interfaces on the same router. My PC is directly attached to FA 0/1 and it can ping the router's interface. However, it can not ping any other interface on the router (which, unless I'm missing something, it should be able to do). The router, on the other hand, can ping everything. I thought static routes might help, but the router already has routes for everything. I'm thinking the packet should come in, the router looks up the destination in its IPv6 routing table, realizes it's for itself, and should respond. I thought maybe it couldn't respond directly, so I tried pinging a device like 2001:0000:0000:1000::2, but I don't get a response. I'm running on IOS 12.4. I'm missing something (hopefully simple), but I just can't see what it is. With only 1 router, how do I enable my PC to talk to the other subnets? Thank you in advance, Robert

    Topology:
        R1
        FA 0/0: 2001:0000:0000:0000::1/52
        FA 0/1: 2001:0000:0000:1000::1/52
        FA 1/0: 2001:0000:0000:2000::1/52
        Loopback 0: 2001:0000:0000:3000::1/52
        PC: 2001:0000:0000:2000::2/52
        PC plugs directly into FA 1/0 on the router.

    --- Configuration ---


  • Windows server detected error with hard disk

    - by user53864
    We have a hosted Windows Server 2008 R2 machine, and I am working as the admin in a small company. The server is hanging and restarting; the hard disk seems to be damaged due to power fluctuation (though we have an inverter), as it's showing the below error message on server reboot:

        Problem detected with the hard disk
        Press any key to continue

    It's a Seagate 1TB SATA hard disk and it boots after pressing Enter, so it's clear that the hard disk is dying. Yes, it's under warranty, but the warranty won't recover the licensed Windows Server 2008 and its data. As it's booting now, I backed up the required things and I am thinking of cloning the entire hard disk. The first thing that struck me was checking the Seagate site for a cloning tool, and I found Seagate DiscWizard, but it is not specified for Windows Server 2008. Please, could anybody help me with your best ideas for the below: Urgently, what's the best way (free of cost) for me to clone in my case onto a new same-sized hard disk? It's a one-time license and I cannot use the same key again if I reinstall the server. Will the license be carried over to the new disk if cloned? Or is there a way to contact Microsoft, explaining the problem that occurred, to obtain a new key for no charge? I also want to take measures for the future. How do I keep two disks in continuous sync? Are mirroring and RAID (converting the disks to dynamic) the only options, or is there a better way I could do this at no additional charge? Any help is greatly appreciated. Thank you!

    EDIT 1: I started cloning the disk with Clonezilla and it was going properly, showing progress in the GUI. But after some time there was no GUI, just a black screen with some codes (they look like disk location numbers) going page by page (I have attached the screenshots, captured from my phone). Do you people think it's actually cloning? I started in the morning and it's evening now. I have left the office to let it finish what it's trying to do, and I'll go and check it tomorrow. I've slowly lost hope; I don't know what face it's going to show tomorrow. Any ideas?
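
    For the cloning question, one common no-cost approach is a raw block copy from a live Linux CD/USB (a hedged sketch; the device names are assumptions and must be verified first, because swapping if= and of= destroys the source):

        # /dev/sda = failing source disk, /dev/sdb = new target disk -- check with lsblk!
        # conv=noerror,sync keeps going past bad sectors, padding them with zeros.
        dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
        # For a disk that is actively failing, GNU ddrescue is usually a safer choice.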


  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

    Does anyone have an idea?

    Update: I've got some more information now. The referrer of the replicate is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). It's just the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns. 301s and 200s are redone, 404s are not. Image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll try later to see if this also happens when I query some other server where I have access to the access logs, and will update here then.


  • Synergy 1.5 crash (OSX 10.6.8)

    - by Oliver
    Thanks for taking the time to read this. I recently installed Synergy 1.5 r2278 (for Mac OSX 10.6.8) and was using it fine for most of the day; then it decided to stop working (the only thing I changed system-wise was the screensaver, and after it started crashing I disabled it to see if that would resolve the problem). When I start Synergy (on the Mac, the client), after about 5 seconds (and successfully connecting to the server) it says: "synergyc quit unexpectedly". Here is the crash log (with binary info removed; too long for post requirements):

        Process:         synergyc [1026]
        Path:            /Applications/Synergy.app/Contents/MacOS/synergyc
        Identifier:      synergy
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  Synergy [1023]
        Date/Time:       2014-05-28 15:36:17.746 +0930
        OS Version:      Mac OS X 10.6.8 (10K549)
        Report Version:  6

        Interval Since Last Report:          2144189 sec
        Crashes Since Last Report:           23
        Per-App Interval Since Last Report:  10242 sec
        Per-App Crashes Since Last Report:   9
        Anonymous UUID:                      86D5A57C-13D4-470E-AC72-48ACDDDE5EB0

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  5

        Application Specific Information:
        abort() called

        Thread 0:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib         0x95cf3afa mach_msg_trap + 10
        1   libSystem.B.dylib         0x95cf4267 mach_msg + 68
        2   com.apple.CoreFoundation  0x95af02df __CFRunLoopRun + 2079
        3   com.apple.CoreFoundation  0x95aef3c4 CFRunLoopRunSpecific + 452
        4   com.apple.CoreFoundation  0x95aef1f1 CFRunLoopRunInMode + 97
        5   com.apple.HIToolbox       0x93654e04 RunCurrentEventLoopInMode + 392
        6   com.apple.HIToolbox       0x93654bb9 ReceiveNextEventCommon + 354
        7   com.apple.HIToolbox       0x937dd137 ReceiveNextEvent + 83
        8   synergyc                  0x000356d0 COSXEventQueueBuffer::waitForEvent(double) + 48
        9   synergyc                  0x00010dd5 CEventQueue::getEvent(CEvent&, double) + 325
        10  synergyc                  0x00011fb0 CEventQueue::loop() + 272
        11  synergyc                  0x00044eb6 CClientApp::mainLoop() + 134
        12  synergyc                  0x0005c509 standardStartupStatic(int, char**) + 41
        13  synergyc                  0x000448a9 CClientApp::runInner(int, char**, ILogOutputter*, int (*)(int, char**)) + 137
        14  synergyc                  0x0005c4b0 CAppUtilUnix::run(int, char**) + 64
        15  synergyc                  0x000427df CApp::run(int, char**) + 63
        16  synergyc                  0x00006e65 main + 117
        17  synergyc                  0x00006dd9 start + 53

        Thread 1:
        0   libSystem.B.dylib         0x95d607da __sigwait + 10
        1   libSystem.B.dylib         0x95d607b6 sigwait$UNIX2003 + 71
        2   synergyc                  0x00009583 CArchMultithreadPosix::threadSignalHandler(void*) + 67
        3   libSystem.B.dylib         0x95d21259 _pthread_start + 345
        4   libSystem.B.dylib         0x95d210de thread_start + 34

        Thread 2:
        0   libSystem.B.dylib         0x95d21aa2 __semwait_signal + 10
        1   libSystem.B.dylib         0x95d2175e _pthread_cond_wait + 1191
        2   libSystem.B.dylib         0x95d212b1 pthread_cond_timedwait$UNIX2003 + 72
        3   synergyc                  0x00009476 CArchMultithreadPosix::waitCondVar(CArchCondImpl*, CArchMutexImpl*, double) + 150
        4   synergyc                  0x0002b18f CCondVarBase::wait(double) const + 63
        5   synergyc                  0x0002ce68 CSocketMultiplexer::serviceThread(void*) + 136
        6   synergyc                  0x0002d698 TMethodJob<CSocketMultiplexer>::run() + 40
        7   synergyc                  0x0002b8f4 CThread::threadFunc(void*) + 132
        8   synergyc                  0x00008f30 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 80
        9   synergyc                  0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        10  libSystem.B.dylib         0x95d21259 _pthread_start + 345
        11  libSystem.B.dylib         0x95d210de thread_start + 34

        Thread 3:  Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib         0x95d1a382 kevent + 10
        1   libSystem.B.dylib         0x95d1aa9c _dispatch_mgr_invoke + 215
        2   libSystem.B.dylib         0x95d19f59 _dispatch_queue_invoke + 163
        3   libSystem.B.dylib         0x95d19cfe _dispatch_worker_thread2 + 240
        4   libSystem.B.dylib         0x95d19781 _pthread_wqthread + 390
        5   libSystem.B.dylib         0x95d195c6 start_wqthread + 30

        Thread 4:
        0   libSystem.B.dylib         0x95d19412 __workq_kernreturn + 10
        1   libSystem.B.dylib         0x95d199a8 _pthread_wqthread + 941
        2   libSystem.B.dylib         0x95d195c6 start_wqthread + 30

        Thread 5 Crashed:
        0   libSystem.B.dylib         0x95d610ee __semwait_signal_nocancel + 10
        1   libSystem.B.dylib         0x95d60fd2 nanosleep$NOCANCEL$UNIX2003 + 166
        2   libSystem.B.dylib         0x95ddbfb2 usleep$NOCANCEL$UNIX2003 + 61
        3   libSystem.B.dylib         0x95dfd6f0 abort + 105
        4   libSystem.B.dylib         0x95d79b1b _Unwind_Resume + 59
        5   synergyc                  0x00008fd1 CArchMultithreadPosix::doThreadFunc(CArchThreadImpl*) + 241
        6   synergyc                  0x0000902a CArchMultithreadPosix::threadFunc(void*) + 74
        7   libSystem.B.dylib         0x95d21259 _pthread_start + 345
        8   libSystem.B.dylib         0x95d210de thread_start + 34

        Thread 5 crashed with X86 Thread State (32-bit):
        eax: 0x0000003c  ebx: 0x95d60f39  ecx: 0xb0288a7c  edx: 0x95d610ee
        edi: 0x00521950  esi: 0xb0288ad8  ebp: 0xb0288ab8  esp: 0xb0288a7c
        ss:  0x0000001f  efl: 0x00000247  eip: 0x95d610ee  cs:  0x00000007
        ds:  0x0000001f  es:  0x0000001f  fs:  0x0000001f  gs:  0x00000037
        cr2: 0x002fe000

        Model: MacBook2,1, BootROM MB21.00A5.B07, 2 processors, Intel Core 2 Duo, 2.16 GHz, 2 GB


  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

        // Add the Data Flow Task
        package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add OLE-DB source component - ** This is where we need the creation name **
        IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "OLEDBSource";
        componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

        ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Aggregate: DTSTransform.Aggregate.2
        Audit: DTSTransform.Lineage.2
        Cache Transform: DTSTransform.Cache.1
        Character Map: DTSTransform.CharacterMap.2
        Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Conditional Split: DTSTransform.ConditionalSplit.2
        Copy Column: DTSTransform.CopyMap.2
        Data Conversion: DTSTransform.DataConvert.2
        Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
        Data Mining Query: MSMDPP.PXPipelineDMQuery.2
        DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Derived Column: DTSTransform.DerivedColumn.2
        Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
        Excel Destination: DTSAdapter.ExcelDestination.2
        Excel Source: DTSAdapter.ExcelSource.2
        Export Column: TxFileExtractor.Extractor.2
        Flat File Destination: DTSAdapter.FlatFileDestination.2
        Flat File Source: DTSAdapter.FlatFileSource.2
        Fuzzy Grouping: DTSTransform.GroupDups.2
        Fuzzy Lookup: DTSTransform.BestMatch.2
        Import Column: TxFileInserter.Inserter.2
        Lookup: DTSTransform.Lookup.2
        Merge: DTSTransform.Merge.2
        Merge Join: DTSTransform.MergeJoin.2
        Multicast: DTSTransform.Multicast.2
        OLE DB Command: DTSTransform.OLEDBCommand.2
        OLE DB Destination: DTSAdapter.OLEDBDestination.2
        OLE DB Source: DTSAdapter.OLEDBSource.2
        Partition Processing: MSMDPP.PXPipelineProcessPartition.2
        Percentage Sampling: DTSTransform.PctSampling.2
        Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
        Pivot: DTSTransform.Pivot.2
        Raw File Destination: DTSAdapter.RawDestination.2
        Raw File Source: DTSAdapter.RawSource.2
        Recordset Destination: DTSAdapter.RecordsetDestination.2
        RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
        Row Count: DTSTransform.RowCount.2
        Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Sampling: DTSTransform.RowSampling.2
        Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Slowly Changing Dimension: DTSTransform.SCD.2
        Sort: DTSTransform.Sort.2
        SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        SQL Server Destination: DTSAdapter.SQLServerDestination.2
        Term Extraction: DTSTransform.TermExtraction.2
        Term Lookup: DTSTransform.TermLookup.2
        Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
        TxTopQueries: DataCollectorTransform.TxTopQueries.1
        Union All: DTSTransform.UnionAll.2
        Unpivot: DTSTransform.UnPivot.2
        XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

        using System;
        using System.Diagnostics;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Program
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
                foreach (PipelineComponentInfo componentInfo in componentInfos)
                {
                    Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
                }
                Console.Read();
            }
        }


  • How to Create Auto Playlists in Windows Media Player 12

    - by DigitalGeekery
    Are you getting tired of the same old playlists in Windows Media Player? Today we'll show you how to create dynamic auto playlists based on criteria you choose in WMP 12 in Windows 7.

    Auto Playlists

    In Library view, click the Create playlist dropdown arrow and select Create auto playlist. On the New Auto Playlist window, type a name for the playlist in the text box. Now we need to choose the criteria by which to filter our playlist. Select Click here to add criteria. For our example, we will create a playlist of songs that were added to the library in the last week from the Alternative genre. So, we will first select Date Added from the dropdown list. Many criteria will have additional options to configure. In the example below you will see that we have a few options to fine-tune. We will filter all the songs added to the library in the last 7 days: select Is After from the first dropdown list, then select Last 7 Days from the second dropdown list. You can add multiple criteria to further filter your playlist. If you can't find the criteria you are looking for, select "More" at the bottom of the dropdown list. This will pull up a filter window with all the criteria. Select a filter and then click OK when finished. From the Genre dropdown, we will select Alternative. If you'd like to add pictures, videos, or TV shows to your auto playlists, you can do so by selecting them from the dropdown list under And also include. You will then be able to select criteria for your pictures, videos, or TV shows from the dropdown list. Finally, you can also add restrictions to your music such as the number of items, duration, or total size. We will limit the duration of our playlist to one hour by selecting Limit Total Duration To, then typing in 1 hour. Click OK. Our library is automatically filtered and a playlist is created based on the criteria we selected. When additional songs are added to the Windows Media Player library, any new songs that fit the criteria will automatically be added to the New Songs playlist.

    You can also save a copy of an auto playlist as a regular playlist. Switch to Playlists view by clicking Playlists from either the top menu or the navigation bar. Select the Play tab and then click Clear list to remove any tracks from the list pane. Right-click on the playlist you want to save, select Add to, and then Play list. The songs from your auto playlist will appear as an Unsaved list on the list pane. Click Save list and type in a name for your playlist. Your auto playlist will continue to change as you add or remove items from your Media Player library that meet the criteria you established; the new saved playlist we just created will stay as it is currently.

    Editing an auto playlist is easy: right-click on the playlist and select Edit. Now you are ready to enjoy your playlist.

    Conclusion

    Auto playlists are a great way to keep your playlists fresh in Windows Media Player 12. Users can get creative and experiment with the wide variety of criteria to customize their listening experience. If you are new to playlists in Windows Media Player, you may want to check out our previous post on how to create custom playlists in Windows Media Player 12. Are you looking to get better sound from WMP 12? Take a look at how to improve playback using enhancements in Windows Media Player 12.


  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
    What is a spatial database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia)

    Today I will be talking about this subject at Microsoft TechEd India. If you want to learn about the spatial aspects of data and how to integrate them with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server, and I really like how it is implemented. In general, performance tuning and query optimization are things I have always enjoyed in my professional life. Indexes are my best friends, and many times by implementing them, and many times by removing them, I have improved the performance of systems. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover this subject in ten super short but very interesting quick slides. I will make sure that in the very first 20 minutes you understand the following topics:

        Introduction to Spatial Database
        One line definition
        Understanding Spatial Indexing
        Index Internals
        Query/Performance Tuning
        Query Hinting/Cost Analysis
        Spatial Index Catalog Views
        Performance Troubleshooting
        Finding Optimal Index using Spatial Index SP
        Common Errors
        Index Maintenance

    The slide deck will be followed by an approximately 30-minute demo telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all the people who attend the event know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will open it with the beautiful script of Botticelli's Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios involving spatial databases and their usage. Do not miss this session. At the end of the session a book will be awarded to the best participant.

    My session details:
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010
    Time: 5:00pm-6:00pm

    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
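
    For readers new to the types mentioned above, a hedged one-liner of the kind of geography query the session demos (the coordinates are approximate):

        -- Distance from Bangalore to Hyderabad using the geography type (meters -> km)
        DECLARE @blr geography = geography::Point(12.97, 77.59, 4326);
        DECLARE @hyd geography = geography::Point(17.38, 78.47, 4326);
        SELECT @blr.STDistance(@hyd) / 1000.0 AS distance_km;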
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Add Zune Desktop Player to Windows 7 Media Center

    - by DigitalGeekery
    Are you a Zune owner who prefers the Zune player for media playback? Today we’ll show you how to integrate the Zune player with WMC using Media Center Studio. You’ll need to download Media Center Studio and the Zune Desktop player software. (See download links below.) Also, make sure you have Media Center closed; some of the actions in Media Center Studio cannot be performed while WMC is open.

    Open Media Center Studio and click on the Start Menu tab at the top of the application. Click the Application button. Here we will create an Entry Point for the Zune player so that we can add it to Media Center. Type in a name for your entry point in the title text box. This is the name that will appear under the tile when added to the Media Center start menu. Next, type in the path to the Zune player. By default this should be C:\Program Files\Zune\Zune.exe. Note: Be sure to use the original path, not a link to the desktop icon.

    The Active image is the image that will appear on the tile in Media Center. If you wish to change the default image, click the Browse button and select a different image. Select Stop the currently playing media from the When launched do the following: dropdown list. Otherwise, if you open Zune player from WMC while playing another form of media, that media will continue to play in the background.

    Now we will choose a keystroke to use to exit the Zune player software and return to Media Center. Click on the green plus (+) button. When prompted, press a key to use to close the Zune player. Note: This may also work with your Media Center remote. You may want to set a keyboard keystroke as well as a button on your remote to close the program. You may not be able to set certain remote buttons to close the application; we found that the back arrow button worked well. You can also choose a keystroke to kill the program if desired. Be sure to save your work before exiting by clicking the Save button on the Home tab.

    Next, select the Start Menu tab and click on the arrow next to Entry points to reveal the available entry points. Find the Zune player tile in the Entry points area. We want to drag the tile out onto one of the menu strips on the start menu; we will drag ours onto the Extras Library strip. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. When you’ve dragged the tile over any of the green plus signs, the red “Move” label will turn to a blue “Move to” label. Now you can drop the tile into position. Save your changes and then close Media Center Studio.

    When you open Media Center, you should see your Zune tile on the start menu. When you select the Zune tile in WMC, Media Center will be minimized and Zune player will be launched. Now you can enjoy your media through the Zune player. When you close Zune player with the previously assigned keystroke or by clicking the “X” at the top right, Windows Media Center will be re-opened.

    Conclusion: We found the Zune player worked with two different Media Center remotes that we tested. It was at times a little tricky to tell where you were when navigating through the Zune software with a remote, but it did work. In addition to managing your music, the Zune player is a nice way to add podcasts to your Media Center setup. We should also mention that you don’t need to actually own a Zune to install and use the Zune player software. Media Center Studio works on both Vista and Windows 7. 
    We covered Media Center Studio a bit more in depth in a previous post on customizing the Windows Media Center start menu. Are you new to Zune player? Familiarize yourself a bit more by checking out some of our earlier posts, like how to update your Zune player, and experiencing your music a whole new way with Zune for PC.

    Downloads: Zune Desktop Player download | Media Center Studio download

    Read the article

  • Automatic Properties, Collection Initializers, and Implicit Line Continuation support with VB 2010

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the eighteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. A few days ago I blogged about two new language features coming with C# 4.0: optional parameters and named arguments. Today I’m going to post about a few of my favorite new features being added to VB with VS 2010: Auto-Implemented Properties, Collection Initializers, and Implicit Line Continuation support.

    Auto-Implemented Properties

    Prior to VB 2010, implementing properties within a class using VB required you to explicitly declare the property as well as implement a backing field variable to store its value. For example, the code below demonstrates how to implement a “Person” class using VB 2008 that exposes two public properties - “Name” and “Age”: [code screenshot in the original post]

    While explicitly declaring properties like above provides maximum flexibility, I’ve always found writing this type of boiler-plate get/set code tedious when you are simply storing/retrieving the value from a field. You can use VS code snippets to help automate the generation of it - but it still generates a lot of code that feels redundant. C# 2008 introduced a cool new feature called automatic properties that helps cut down the code quite a bit for the common case where properties are simply backed by a field. VB 2010 also now supports this same feature. Using the auto-implemented properties feature of VB 2010 we can now implement our Person class using just the code below: [code screenshot in the original post]

    When you declare an auto-implemented property, the VB compiler automatically creates a private field to store the property value as well as generates the associated Get/Set methods for you. As you can see above, the code is much more concise and easier to read. The syntax also supports optionally initializing the properties with default values if you want to: [code screenshot in the original post]

    You can learn more about VB 2010’s automatic property support from this MSDN page.

    Collection Initializers

    VB 2010 also now supports using collection initializers to easily create a collection and populate it with an initial set of values. You identify a collection initializer by declaring a collection variable and then using the From keyword followed by braces { } that contain the list of initial values to add to the collection. Below is a code example where I am using the new collection initializer feature to populate a “Friends” list of Person objects with two people, and then bind it to a GridView control to display on a page: [code screenshot in the original post]

    You can learn more about VB 2010’s collection initializer support from this MSDN page.

    Implicit Line Continuation Support

    Traditionally, when a statement in VB has been split up across multiple lines, you had to use a line-continuation underscore character (_) to indicate that the statement wasn’t complete. For example, with VB 2008 the below LINQ query needs to append a “_” at the end of each line to indicate that the query is not complete yet: [code screenshot in the original post]

    The VB 2010 compiler and code editor now add support for what is called “implicit line continuation” - the compiler is smarter about auto-detecting line continuation scenarios, and as a result no longer needs you to explicitly indicate that the statement continues in many, many scenarios. This means that with VB 2010 we can now write the above code with no “_” at all: [code screenshot in the original post]

    The implicit line continuation feature also works well when editing XML Literals within VB (which is pretty cool). You can learn more about VB 2010’s Implicit Line Continuation support and many of the scenarios it supports from this MSDN page (scroll down to the “Implicit Line Continuation” section to find details).
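    The code samples in the original post are screenshots and don’t survive in this text version, so here is a rough VB sketch of what they likely showed. Class names, property values, and the query are illustrative:

    Imports System.Collections.Generic
    Imports System.Linq

    ' VB 2008 style: explicitly declared property with a backing field
    Public Class PersonOldStyle
        Private _name As String
        Public Property Name() As String
            Get
                Return _name
            End Get
            Set(ByVal value As String)
                _name = value
            End Set
        End Property
    End Class

    ' VB 2010 style: auto-implemented properties, optionally with default values
    Public Class Person
        Public Property Name As String = "Unknown"
        Public Property Age As Integer
    End Class

    Module Demo
        Sub Main()
            ' Collection initializer: the From keyword populates the list inline
            Dim friendsList As New List(Of Person) From {
                New Person With {.Name = "Scott", .Age = 42},
                New Person With {.Name = "Jason", .Age = 41}
            }

            ' Implicit line continuation: no trailing "_" needed on each line
            Dim query = From p In friendsList
                        Where p.Age > 40
                        Select p.Name

            For Each personName In query
                Console.WriteLine(personName)
            Next
        End Sub
    End Module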
    Summary

    The above three VB language features are but a few of the new language and code editor features coming with VB 2010. Visit this site to learn more about some of the other VB language features coming with the release. Also subscribe to the VB team’s blog to learn more and stay up-to-date with the posts the team regularly publishes.

    Hope this helps,

    Scott

    Read the article

  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I’ve been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be I’m still tired from writing that Windows Phone 7 game development book (which is out now!) or it could just be I’m tired of winter and want some sunshine. All I know is that even while I’m sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.

    Time Critical XNA News:
    - The 2011 MVP summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136
    - Dream Build Play - there’s no new announcement yet, but you can’t get much more to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx

    XNA Team:
    - Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx
    - Nick Gravelyn tries a new tactic in deciding if there’s enough interest to develop a sequel or not. Don’t YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/

    XNA MVPs:
    - Andy “The ZMan” Dunn finally shares what he’s been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be
    - Joel Martinez lets developers around NYC know they should be signing up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk

    XNA Developers:
    - Michael McLaughlin shares an XNA RenderTarget2D Sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx
    - Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction
    - ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/
    - Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you’d like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be
    - Jason Swearingen (of Novaleaf) posts part 1 of his Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/
    - Brian Lawson of Dark Flow Studios shares what he has been up to lately, with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1
    - Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he’s got a few started already!) http://programmingwithovery.wordpress.com/

    Xbox LIVE Indie Games (XBLIG):
    - GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx
    - Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig
    - ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! http://www.indiegogo.com/ezmuze
    - Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
    - JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4

    XNA Game Development:
    - @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106
    - @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/
    - XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/
    - Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12, and it’s now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336
    - @liortal announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/
    - Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx
    - Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html
    - Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be
    - The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx
    - Sgt.Conker posts a “Clingerman” (hey that’s me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a data set I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.

    For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area.

    Step by Step Guide

    That was all it took to hook in the subscribed provider, Melissa Data, directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
    Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, and Zip. I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data.

    Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third party reference data provider, Melissa Data’s Address Check, and mapped each domain that I had to the provider’s schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data.

    My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Request Limit Length Limits for IIS’s requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour – and not for the first time – was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

    http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn’t terribly helpful – normally a 404 is a resource not found error, but the resource is definitely there. So how the heck do you figure out what’s wrong? If you’re just interested in the solution, here’s the short version: IIS by default allows only a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the RequestFiltering module in IIS 6 and later, which can be configured in ApplicationHost.config (in %windir%\system32\inetsrv\config). To set the value, configure the requestLimits key like so:

    <configuration>
      <security>
        <requestFiltering>
          <requestLimits maxQueryString="2048">
          </requestLimits>
        </requestFiltering>
      </security>
    </configuration>

    This fixed me right up and made the requests work.
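    As a side note, if you’d rather scope the change to a single application than edit ApplicationHost.config, the same section can usually be set in the site’s web.config instead, assuming the section isn’t locked at the server level. A sketch:

    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxQueryString="2048" />
        </requestFiltering>
      </security>
    </system.webServer>

    The equivalent change can also typically be made from an elevated command prompt via appcmd (the path assumes a default IIS install):

    %windir%\system32\inetsrv\appcmd set config /section:requestFiltering /requestLimits.maxQueryString:2048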
    How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I’ll take you through a quick review of how I tracked this down.

    Finding the Problem

    The issue with the error returned is that IIS returns a 404 Resource Not Found error and doesn’t provide much information about it. If you’re lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page; the bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run localhost. On my server, which has about 10 domains running, localhost doesn’t point at the particular site I had problems with, so I didn’t get the luxury of this nice error page.

    Using Failed Request Tracing to Retrieve Error Info

    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this: find the Failed Request Tracing Rules in the IIS Service Manager. Select the option and then Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you’re looking for, it might help to specify it exactly to keep the number of errors down. Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module.

    OK, it’s the RequestFilteringModule. Request Filtering is built into IIS 6-7 and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you’re dealing with OpenId, since these return URLs are quite extensive.

    Debugging failed requests is never fun, but IIS 6 and forward at least provides us the tools that can help point us in the right direction. The error message in the FRT report isn’t as nice as the IIS error page, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you’re not intimately familiar, but I think with some Google searches it would be easy to track this down in a few tries.

    Hope this was useful to some of you. It was useful to me to put this out as a reminder – I’ve run into this issue before myself and totally forgot. Next time I’ve got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, Security

    Read the article

  • How to Access a Windows Desktop From Your Tablet or Phone

    - by Chris Hoffman
    iPads and Android tablets can’t run Windows apps locally, but they can access a Windows desktop remotely — even with a physical keyboard. In a pinch, the same tricks can be used to access a Windows desktop from a smartphone. Microsoft recently launched their own official Remote Desktop app for iOS and Android devices. Microsoft’s official apps are primarily useful for businesses — if you’re a typical home user, you’ll want to use a different remote desktop solution.

    Microsoft’s Remote Desktop App

    Microsoft now offers official Remote Desktop apps for iPad and iPhone as well as Android tablets and smartphones. The apps use Microsoft’s RDP protocol to connect to remote Windows systems. They’re essentially just new clients for the Remote Desktop feature that has been included in Windows for more than a decade. There are big problems with these apps if you’re an average home user: Microsoft’s Remote Desktop server is not available on standard or Home versions of Windows, only Professional and Enterprise editions. If you do have the appropriate edition of Windows, you’ll have to set up port-forwarding and a dynamic DNS service if you want to access your Windows desktop from outside your local network. You could also set up a VPN — either way you’ll need to do some footwork. This app is a gift to businesses who are already using Remote Desktop and enthusiasts who have the more expensive versions of Windows and don’t mind the configuration process. To set this up, follow our guide to setting up Remote Desktop for Internet access and connect using the Remote Desktop app instead of traditional Remote Desktop clients.

    TeamViewer

    If you have the standard edition of Windows or you just don’t want to mess around with port-forwarding and dynamic DNS configuration, you’ll want to skip Remote Desktop and use something else. We like TeamViewer for this. Just as it’s a great way to remotely troubleshoot your relatives’ computers, it’s also a great way to remotely access your own computer. It doesn’t have the same limitations Microsoft’s Remote Desktop system has — it’s completely free for personal use, runs on any edition of Windows, and is easy to set up. There’s no messing around with port-forwarding or dynamic DNS configuration. To get started, just download and run the TeamViewer program on your computer. You can get started with it immediately, but you’ll want to set up unattended access to connect remotely without using the codes displayed on your screen. To connect, just install the TeamViewer mobile app and log in with the details the TeamViewer window displays. TeamViewer also offers software that runs on Mac and Linux, so you can remote-control other types of computers from your tablet.

    Other Options

    Microsoft’s Remote Desktop app and TeamViewer aren’t the only options, of course. There are a variety of different apps and services built for this. Splashtop is another fairly popular remote desktop solution that some people report as being faster. Unfortunately, it’s not entirely free — the iPad and iPhone app costs $20 at regular price, and to use it over the Internet you’ll have to purchase an additional “Anywhere Access Pack.” If you’re frustrated with TeamViewer’s speed and you don’t mind spending money, you may want to try Splashtop instead. As always, you could use any VNC server along with a VNC client app. VNC is the do-it-yourself solution — it’s an open protocol. 
Unlike Microsoft’s RDP protocol, you can install a VNC server of your own, configure it how you like, and use any mobile VNC client app. This is more flexible because you can install a VNC server on any edition of Windows or even non-Windows operating systems, but it otherwise has all the same issues — you have to worry about port-forwarding, setting up dynamic DNS, and securing your VNC server. Keep an eye on Chrome Remote Desktop. Chrome already offers a built-in remote desktop feature that allows you to remotely control your PC from another Windows, Mac, Linux, or Chrome OS device. Google is rumored to be building an Android app for Chrome Remote Desktop, which would allow you to easily access a computer running Chrome from Android tablets. Google’s solution is much more user-friendly for average people than Microsoft’s Remote Desktop solution, which is clearly geared towards businesses. Chrome Remote Desktop just requires signing in with a Google account. Remote desktop solutions like Microsoft’s Remote Desktop app and TeamViewer are also available for Windows tablets. On Windows RT devices like the Surface RT and Surface 2, they allow you to use the full Windows desktop that’s unavailable on your tablet.     

    Read the article

  • Understanding EDI 997.

    - by VishnuTiwariBlog
    Hi Guys, this is for the EDI starter. Below are the complete details of the EDI 997 segments and their elements.

    997 Functional Acknowledgment Transaction Layout:

    010 ST – Transaction Set Header (Mandatory). Indicates the start of a transaction set and assigns a control number. Example: ST*997*382823~
      ST01: Code uniquely identifying a Transaction Set (Mandatory)
      ST02: Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set (Mandatory)

    020 AK1 – Functional Group Response Header (Mandatory). Starts acknowledgment of a functional group. Example: AK1*QM*2459823
      AK101: Code identifying a group of application-related transaction sets. IN = Invoice Information (810), SH = Ship Notice/Manifest (856)
      AK102: Assigned number originated and maintained by the sender

    030 AK2 – Transaction Set Response Header (Mandatory). Starts acknowledgment of a single transaction set. Example: AK2*856*001
      AK201: Code uniquely identifying a Transaction Set. 810 = Invoice, 856 = Ship Notice/Manifest
      AK202: Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set

    040 AK3 – Data Segment Note (Optional). Reports errors in a data segment and identifies the location of the data segment. Example: AK3*TD3*9
      AK301 (Segment ID Code): Code defining the segment ID of the data segment in error (see Appendix A, Number 77)
      AK302 (Segment Position in Transaction Set): The numerical count position of this data segment from the start of the transaction set; the transaction set header is count position 1

    050 AK4 – Data Element Note (Optional). Reports errors in a data element or composite data structure and identifies the location of the data element. Example: AK4*2**2
      AK401 (Position in Segment): Code indicating the relative position of a simple data element, or the relative position of a composite data structure combined with the relative position of the component data element within the composite data structure, in error; the count starts with 1 for the simple data element or composite data structure immediately following the segment ID
      AK402 (Element Position in Segment): Indicates the relative position of a simple data element, or the relative position of a composite data structure combined with the relative position of the component within the composite data structure, in error; in the data segment the count starts with 1 for the simple data element or composite data structure immediately following the segment ID
      AK403 (Data Element Syntax Error Code): Code indicating the error found after syntax edits of a data element. 1 = Mandatory Data Element Missing, 2 = Conditional Required Data Element Missing, 3 = Too Many Data Elements, 4 = Data Element Too Short, 5 = Data Element Too Long, 6 = Invalid Character in Data Element, 7 = Invalid Code Value, 8 = Invalid Date, 9 = Invalid Time, 10 = Exclusion Condition Violated
      AK404 (Copy of Bad Data Element): A copy of the data element in error

    060 AK5 – Transaction Set Response Trailer (Mandatory). Acknowledges acceptance or rejection and reports errors in a transaction set. Examples: AK5*A~ and AK5*R*5~
      AK501 (Transaction Set Acknowledgment Code): Code indicating accept or reject condition based on the syntax editing of the transaction set. A = Accepted, E = Accepted But Errors Were Noted, R = Rejected
      AK502 (Transaction Set Syntax Error Code): Code indicating the error found based on the syntax editing of a transaction set. 1 = Transaction Set Not Supported, 2 = Transaction Set Trailer Missing, 3 = Transaction Set Control Number in Header and Trailer Do Not Match, 4 = Number of Included Segments Does Not Match Actual Count, 5 = One or More Segments in Error, 6 = Missing or Invalid Transaction Set Identifier, 7 = Missing or Invalid Transaction Set Control Number

    070 AK9 – Functional Group Response Trailer (Mandatory). Acknowledges acceptance or rejection of a functional group and reports the number of included transaction sets from the original trailer, the accepted sets, and the received sets in this functional group. Examples: AK9*A*1*1*1~ and AK9*R*1*1*0~
      AK901 (Functional Group Acknowledge Code): Code indicating accept or reject condition based on the syntax editing of the functional group. A = Accepted, E = Accepted, But Errors Were Noted, R = Rejected
      AK902 (Number of Transaction Sets Included): Total number of transaction sets included in the functional group or interchange (transmission) group terminated by the trailer containing this data element
      AK903 (Number of Received Transaction Sets): Number of Transaction Sets received
      AK904 (Number of Accepted Transaction Sets): Number of accepted Transaction Sets in a Functional Group
      AK905 (Functional Group Syntax Error Code): Code indicating the error found based on the syntax editing of the functional group header and/or trailer. 1 = Functional Group Not Supported, 2 = Functional Group Version Not Supported, 3 = Functional Group Trailer Missing, 4 = Group Control Number in the Functional Group Header and Trailer Do Not Agree, 5 = Number of Included Transaction Sets Does Not Match Actual Count, 6 = Group Control Number Violates Syntax

    080 SE – Transaction Set Trailer (Mandatory). Indicates the end of the transaction set and provides the count of the transmitted segments, including the beginning (ST) and ending (SE) segments. Example: SE*9*223~
      SE01 (Number of Included Segments): Total number of segments included in a transaction set, including the ST and SE segments
      SE02 (Transaction Set Control Number): Identifying control number that must be unique within the transaction set functional group, assigned by the originator for a transaction set
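    Putting the segments together, a hypothetical 997 that accepts a single 856 might look like the following (the control numbers and group reference are made up; note that SE01 counts all six segments, including ST and SE):

    ST*997*0001~
    AK1*SH*2459823~
    AK2*856*001~
    AK5*A~
    AK9*A*1*1*1~
    SE*6*0001~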

    Read the article


  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is external files. Traditionally we knew exactly which machine and folder our application (website or web service) was located in, so we could use MapPath or some other methods to read and write external files, for example images, text files or XML files, etc. But things change when we deploy onto Azure. Azure is not a server or a single machine; it’s a set of virtual server machines running under the Azure OS. And even worse, your application might be moved between these machines, so it’s impossible to read or write external files from a fixed local path on Azure. To resolve this issue, Windows Azure provides another storage service for us: Blob.

    Different from the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are composed of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.

    In the managed library, the Azure SDK allows us to communicate with the blob service through the classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. Similar to the table service managed library, the CloudBlobClient allows us to reach the blob service by passing in our storage account information, and it is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs into the CloudBlockBlob and CloudPageBlob classes.

    Let’s improve our example from the previous posts and add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.

    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var accountContext = new DynamicDataContext<Account>(storageAccount);

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == email)
        .ToList()
        .Count;
    if (accountNumber <= 0)
    {
        throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
    }

    Then there are three steps for saving the image into the blob service. First, alike the table service, I created the container with a unique name, creating it if it does not exist.

    // create the blob container for account logos if it does not exist
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.

    // configure blob container for public access
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);

    And at the end I combine the blob resource name from the input file name and a Guid, then save it to the block blob by using the UploadByteArray method. Finally I return the URL of this blob back to the client side.

    // save the blob into the blob service
    string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.UploadByteArray(image);

    return blob.Uri.ToString();

    Let’s update the client side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it’s OK it will show the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I’ve just uploaded through the URL here.

    You may notice that the blob URL is based on the container name and the blob’s unique name. The Azure SDK documentation has a page on the rules for naming them, but I think the simple rule would be: they must be valid as a URL address. So you cannot name the container with a dot or slash, as it will break the ADO.NET Data Services routing rule. For example, if you named the blob container Account.Logo, it would throw an exception that says 400 Bad Request.
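    For completeness, reading the logo back out of the blob service is just as straightforward with the same v1.x StorageClient API used above. A minimal sketch, where uniqueBlobName is the blob name generated during upload:

    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    byte[] image = blob.DownloadByteArray(); // fetches the blob contents as a byte array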
    Summary

    In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not give us access to a fixed local file system, we have to migrate our file reading/writing code to the blob service before deploying to Azure. To reduce this effort, Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It’s built on top of the blob service but is better suited to file access. I will discuss it more in the next post.

    Hope this helps,

    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >