Search Results

Search found 9930 results on 398 pages for 'exec maven plugin'.


  • Generate MetaData with ANT

    - by Neil Foley
    I have a folder structure that contains multiple JavaScript files. Each of these files needs a standard piece of text at the top: //@include "includes.js". Each folder also needs to contain a file named includes.js that has an include entry for each file in its directory and an entry for the include file in its parent directory. I'm trying to achieve this using Ant and it's not going well. So far I have the following, which does the job of inserting the header, but not without actually moving or copying the file. I have heard people mention the <replace> task for this, but I am a bit stumped.

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="JavaContentAssist" default="start" basedir=".">
            <taskdef resource="net/sf/antcontrib/antcontrib.properties">
                <classpath>
                    <pathelement location="C:/dr_workspaces/Maven Repository/.m2/repository/ant-contrib/ant-contrib/20020829/ant-contrib-20020829.jar"/>
                </classpath>
            </taskdef>
            <target name="start">
                <foreach target="strip" param="file">
                    <fileset dir="${basedir}">
                        <include name="**/*.js"/>
                        <exclude name="**/includes.js"/>
                    </fileset>
                </foreach>
            </target>
            <target name="strip">
                <move file="${file}" tofile="${a_location}" overwrite="true">
                    <filterchain>
                        <striplinecomments>
                            <comment value="//@" />
                        </striplinecomments>
                        <concatfilter prepend="${basedir}/header.txt">
                        </concatfilter>
                    </filterchain>
                </move>
            </target>
        </project>

    As for generating the include files in each directory, I'm not sure where to start at all. I'd appreciate it if somebody could point me in the right direction.
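
    For the generation step, one possible sketch of the per-directory logic (outside Ant) is a short Python walk over the tree. This is only an illustration of the idea, not the asker's build: it assumes the script is run from the root of the JavaScript tree and that the parent entry should be written as a relative "../includes.js" path.

        import os

        ROOT = "."  # assumed to be the top of the JavaScript folder tree

        for dirpath, dirnames, filenames in os.walk(ROOT):
            # one //@include line per .js file in this directory (skip includes.js itself)
            js_files = sorted(f for f in filenames if f.endswith(".js") and f != "includes.js")
            lines = ['//@include "%s"' % name for name in js_files]
            # every directory below the root also references its parent's includes.js
            if os.path.abspath(dirpath) != os.path.abspath(ROOT):
                lines.insert(0, '//@include "../includes.js"')
            with open(os.path.join(dirpath, "includes.js"), "w") as handle:
                handle.write("\n".join(lines) + "\n")

    Something like this could be wired into the build with Ant's <exec> task, though an ant-contrib loop over directories would keep it pure Ant.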

    Read the article

  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    OK, so I've spun up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html The only step I've not been able to complete is running the following status check:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 instance is just a free-tier setup, so isn't that well spec'd. Regardless, I've been trying to access this through my browser. I've set up the Elastic IP and pointed my domain at it (for the purposes of this, let's say it's git.mydom.co.uk). A whois on this domain shows it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk". Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). This disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading this could be the issue on a troubleshooting page). Since then I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permissions of the file in /sites-available, and not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;    # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
          server_name git.mydom.co.uk;   # e.g., server_name source.example.com;
          server_tokens off;             # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # Or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!

    Read the article

  • How do I control the direction of the scroll on my coda slider?

    - by lightingwrist
    Hello, I have a coda slider and am unable to determine the direction of the scroll. I have 3 panels that I want to scroll left to right. Sometimes it scrolls left to right, sometimes up and down, and sometimes horizontally. How do I lock it down to go the direction I want? Here is the HTML: <body> <div id="slider_home" class="round_10px"> <ul class="navigation_home"> <li><a href="#scroll_Parents" class="round_10px">Information For Parents</a></li> <li><a href="#scroll_Materials" class="round_10px">Print Materials</a></li> <li><a href="#scroll_Resources" class="round_10px">Online Resources</a></li> </ul> <div id="scroll_bg_home"> <div class="scroll_home"> <div class="scrollContainer_home"> <div class="panel_home" id="scroll_Parents"> content </div> <div class="panel_home" id="scroll_Materials"> content </div> <div class="panel_home" id="scroll_Resources"> content </div> </div> </div> </div> </div> </body> Here is the CSS: #wrapper {width:550px;margin:0px auto;} #intro {padding-bottom:10px;} h2 {margin:0;margin-bottom:14px;padding:0;} #slider {width:631px;margin:10px auto 0px auto;position:relative;} #scroll_bg{height:360px;width:590px;overflow:hidden;position:relative;clear:left;background:#FFFFFF url(images/) no-repeat; margin:0px auto 0px auto} .scroll{ background:transparent; width:550px; height:370px; padding:0px; margin:0px auto; overflow:hidden; } .scrollContainer div.panel {padding:20px 0px;height:330px; width:550px;margin:0px;float:left;} #shade {background:#EDEDEC url(images/shade.jpg) no-repeat 0 0;height:50px;} ul.navigation {list-style:none;margin:0px 0px 0px 23px;padding:0px;padding-bottom:0px;} ul.navigation li {display:inline; margin-right:5px;} ul.navigation li a { background:#FFFFFF;font-family:Arial, Helvetica, sans-serif; font-size:16px; font-weight:bold; color:#CCCCCC;padding:5px 5px 5px 5px;border:1px #F4F4F4 solid;text-decoration: none;} ul.navigation a:hover { color:#EDEDEC;border:1px #E6E6E6 solid;} ul.navigation a.selected {color:#333333;} ul.navigation a:focus {outline:none;} .scrollButtons {position:absolute;top:150px;cursor:pointer;} .scrollButtons.left {left:-37px;top:20px} .scrollButtons.right {right:-32px;top:20px;} .hide {display:none;} And here is the Jquery includes file: // when the DOM is ready... 
$(document).ready(function () { var $panels = $('#slider_home .scrollContainer_home > div.panel_home'); var $container = $('#slider_home .scrollContainer_home'); // if true, we'll float all the panels left and fix the width // of the container var horizontal = true; // float the panels left if we're going horizontal if (horizontal) { $panels.css({ 'float' : 'left', 'position' : 'relative' // IE fix to ensure overflow is hidden }); // calculate a new width for the container (so it holds all panels) $container.css('width', $panels[0].offsetWidth * $panels.length); } // collect the scroll object, at the same time apply the hidden overflow // to remove the default scrollbars that will appear var $scroll_bg = $('#scroll_bg_home'); var $scroll = $('#slider_home .scroll_home').css('overflow', 'hidden'); // apply our left + right buttons $scroll_bg .before('<img class="scrollButtons_home left" src="styles/images/BackFlip.jpg" />') .after('<img class="scrollButtons_home right" src="styles/images/flipForward.jpg" />'); // handle nav selection function selectNav() { $(this) .parents('ul:first') .find('a') .removeClass('selected') .end() .end() .addClass('selected'); } $('.navigation_home').find('a').click(selectNav); // go find the navigation link that has this target and select the nav function trigger(data) { var el = $('.navigation_home').find('a[href$="' + data.id + '"]').get(0); selectNav.call(el); } if (window.location.hash) { trigger({ id : window.location.hash.substr(1) }); } else { $('.navigation_home a:first').click(); } // offset is used to move to *exactly* the right place, since I'm using // padding on my example, I need to subtract the amount of padding to // the offset. Try removing this to get a good idea of the effect var offset = parseInt((horizontal ? $container.css('paddingTop') : $container.css('paddingLeft')) || 0) * -1; var scrollOptions = { target: $scroll, // the element that has the overflow // can be a selector which will be relative to the target items: $panels, navigation: '.navigation_home a', // selectors are NOT relative to document, i.e. make sure they're unique prev: 'img.left', next: 'img.right', // allow the scroll effect to run both directions axis: 'xy', onAfter: trigger, // our final callback offset: offset, // duration of the sliding effect duration: 500, // easing - can be used with the easing plugin: // http://gsgd.co.uk/sandbox/jquery/easing/ easing: 'swing' }; // apply serialScroll to the slider - we chose this plugin because it // supports// the indexed next and previous scroll along with hooking // in to our navigation. $('#slider_home').serialScroll(scrollOptions); // now apply localScroll to hook any other arbitrary links to trigger // the effect $.localScroll(scrollOptions); // finally, if the URL has a hash, move the slider in to position, // setting the duration to 1 because I don't want it to scroll in the // very first page load. We don't always need this, but it ensures // the positioning is absolutely spot on when the pages loads. scrollOptions.duration = 1; $.localScroll.hash(scrollOptions); });

    Read the article

  • Init script & the green [ OK ]

    - by Lord Loh.
    I am trying to install fast-cgi for nginx on an EC2 instance. I followed the steps explained here, but that is meant for Debian and does not work out of the box for a red-hat based system. I modified the script a bit to look like - #!/bin/bash ### BEGIN INIT INFO # Provides: php-fcgi # Required-Start: $nginx # Required-Stop: $nginx # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts php over fcgi # Description: starts php over fcgi ### END INIT INFO . /etc/rc.d/init.d/functions (( EUID )) && echo .You need to have root priviliges.. && exit 1 BIND=/tmp/php.socket USER=nginx PHP_FCGI_CHILDREN=15 PHP_FCGI_MAX_REQUESTS=1000 PHP_CGI=/usr/bin/php-cgi PHP_CGI_NAME=`basename $PHP_CGI` PHP_CGI_ARGS="- USER=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND" RETVAL=0 start() { echo -n "Starting PHP FastCGI: " #ORIGINAL LINE #daemon $PHP_CGI --quiet --start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS #MODIFIED LINE daemon --user=$USER $PHP_CGI -b $BIND& RETVAL=$? echo [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi #echo "$PHP_CGI_NAME." } stop() { echo -n "Stopping PHP FastCGI: " killall -q -w -u $USER $PHP_CGI RETVAL=$? echo "$PHP_CGI_NAME." rm /var/lock/subsys/php-fcgi } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; *) echo "Usage: php-fastcgi {start|stop|restart}" exit 1 ;; esac exit $RETVAL The problem I have now is - service php-fcgi start keeps the shell blocked. If I run service php-fcgi start & and then ps aux, I see the php-cgi process running bound to the socket. I see the start command stop only when I execute service php-fcgi stop. How do I solve this blocking issue? I have tried adding an & at the end of the line spawning the daemon. But other scripts do not seem to be doing this. This is the most complicated script I am attempting to modify yet :-( How do I get the script to display the green [ OK ]? I checked scripts like httpd and saw that all they were doing was something as shown below. But I never see a green [ OK ] when I execute php-fcgi. I also discovered that putting echo_success with functions sourced displays the green [ OK ] but I do not see any other scripts in the /etc/rc.d/init.d/ executing echo_success or echo_failure. What have I got wrong? Also, How do i specify PHP_FCGI_CHILDREN with daemon? echo [ $RETVAL -eq 0 ] && touch /var/lock/subsys/

    Read the article

  • Puppet and launchd services?

    - by Joel Westberg
    We have a production environment configured with Puppet, and want to be able to set up a similar environment on our development machines: a mix of Red Hats, Ubuntus and OSX. As might be expected, OSX is the odd man out here, and sadly, I'm having a lot of trouble with getting this to work. My first attempt was using macports, using the following declaration: package { 'rabbitmq-server': ensure => installed, provider => macports, } but this, sadly, generates the following error: Error: /Stage[main]/Rabbitmq/Package[rabbitmq-server]: Could not evaluate: Execution of '/opt/local/bin/port -q installed rabbitmq-server' returned 1: usage: cut -b list [-n] [file ...] cut -c list [file ...] cut -f list [-s] [-d delim] [file ...] while executing "exec dscl -q . -read /Users/$env(SUDO_USER) NFSHomeDirectory | cut -d ' ' -f 2" (procedure "mportinit" line 95) invoked from within "mportinit ui_options global_options global_variations" Next up, I figured I'd give homebrew a try. There is no package provider available by default, but puppet-homebrew seemed promising. Here, I got much farther, and actually managed to get the install to work. package { 'rabbitmq': ensure => installed, provider => brew, } file { "plist": path => "/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist", source => "/usr/local/opt/rabbitmq/homebrew.mxcl.rabbitmq.plist", ensure => present, owner => root, group => wheel, mode => 0644, } service { "homebrew.mxcl.rabbitmq": enable => true, ensure => running, provider => "launchd", require => [ File["/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist"] ], } Here, I don't get any error. But RabbitMQ doesn't start either (as it does if I do a manual load with launchctl) [... snip ...] Debug: Executing '/bin/launchctl list' Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist' Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /var/db/launchd.db/com.apple.launchd/overrides.plist' Debug: /Schedule[weekly]: Skipping device resources because running on a host Debug: /Schedule[puppet]: Skipping device resources because running on a host Debug: Finishing transaction 2248294820 Debug: Storing state Debug: Stored state in 0.01 seconds Finished catalog run in 25.90 seconds What am I doing wrong?

    Read the article

  • Java web application best practices

    - by Bruce
    Hi all I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails.. Hugely grateful for any help! I have what I think is a fairly simple web application. It contains a couple of jsps that reference a couple of java beans, and the usual static html, js, css and images. Decision 1) I wanted to have a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good? Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my html, for example, I may have a reference to a static file such as http://static.foo.com/file . To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the urls work correctly without changing anything. Decision 3) I decided to use eclipse and maven to give me a best practice environment for administering and building my project. So I have a nice tight set up now, except that: Every time I want to change anything in development, like one line in an html file, I have to rebuild the entire project and then wait for tomcat to load the war before I can see if it's what I wanted. So my questions are: 1) Is there a way to connect up eclipse and tomcat so that I don't have to rebuild the war each time? ie tomcat is looking straight at my actual workspace to serve up the static files? 2)I think I'm maybe making things harder by using /etc/hosts to reflect production urls - is there a better way that doesn't involve manually changing over urls (relative urls are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?) 3) Is this really best practice?? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to be able to develop javascript and html and css quickly, as quickly as if one just pointed apache at the directory and developed live? What do people find works? Many thanks!

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this: Clonezilla looks difficult to install and invasive; Mondo only seems to support backup if you are local to the machine; Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
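
    For what it's worth, the "dd over SSH, initiated from one central box" idea can be sketched in a few lines of Python. This is only an illustration, not a recommendation of a finished tool: it assumes key-based SSH access as root to each server, that /dev/sda is the disk to image, and that /backup on the central machine has enough space. Imaging a live, mounted filesystem this way is also not guaranteed to be consistent.

        import subprocess

        # assumed inventory of hosts and the disk to image on each; adjust to the real environment
        HOSTS = ["web01", "web02", "db01"]
        DEVICE = "/dev/sda"

        for host in HOSTS:
            out_file = "/backup/%s_sda.img.gz" % host  # assumed destination on the backup server
            # stream a raw image of the disk over SSH, compressing on the source host
            cmd = "ssh root@%s 'dd if=%s bs=1M | gzip -c' > %s" % (host, DEVICE, out_file)
            print("imaging %s ..." % host)
            subprocess.check_call(cmd, shell=True)

    Restoring would be the reverse (gunzip piped into dd from a rescue environment); tools such as Clonezilla and Mondo essentially automate this same idea.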

    Read the article

  • Two-Hop SSH connection with two separate public keys

    - by yigit
    We have the following ssh hop setup: localhost -> hub -> server hubuser@hub accepts the public key for localuser@localhost. serveruser@server accepts the public key for hubuser@hub. So we are issuing ssh -t hubuser@hub ssh serveruser@server for connecting to server. The problem with this setup is we can not scp directly to the server. I tried creating .ssh/config file like this: Host server user serveruser port 22 hostname server ProxyCommand ssh -q hubuser@hub 'nc %h %p' But I am not able to connect (yigit is localuser): $ ssh serveruser@server -v OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Reading configuration data /home/yigit/.ssh/config debug1: /home/yigit/.ssh/config line 19: Applying options for server debug1: Reading configuration data /etc/ssh/ssh_config debug1: Executing proxy command: exec ssh -q hubuser@hub 'nc server 22' debug1: permanently_drop_suid: 1000 debug1: identity file /home/yigit/.ssh/id_rsa type 1000 debug1: identity file /home/yigit/.ssh/id_rsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_dsa type -1 debug1: identity file /home/yigit/.ssh/id_dsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA cb:ee:1f:78:82:1e:b4:39:c6:67:6f:4d:b4:01:f2:9f debug1: Host 'server' is known and matches the ECDSA host key. debug1: Found key in /home/yigit/.ssh/known_hosts:33 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/yigit/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/yigit/.ssh/id_dsa debug1: Trying private key: /home/yigit/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). Notice that it is trying to use the public key localuser@localhost for authenticating on server and fails since it is not the right one. Is it possible to modify the ProxyCommand so that the key for hubuser@hub is used for authenticating on server?
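
    For reference, the two-hop idea with separate keys can be sketched in Python with paramiko. This only works if the private key that server accepts is available on the local machine (copied from hub, or the local public key added to serveruser@server), which is really the crux of the Permission denied above; the key file names here are illustrative, not from the original setup.

        import os
        import paramiko

        # hub_key is the key hubuser@hub accepts; server_key is the key serveruser@server accepts
        hub_key = os.path.expanduser("~/.ssh/id_rsa")
        server_key = os.path.expanduser("~/.ssh/id_rsa_hub")

        hub = paramiko.SSHClient()
        hub.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        hub.connect("hub", username="hubuser", key_filename=hub_key)

        # open a TCP channel from hub to server:22 and use it as the socket for the second hop
        channel = hub.get_transport().open_channel("direct-tcpip", ("server", 22), ("127.0.0.1", 0))

        server = paramiko.SSHClient()
        server.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        server.connect("server", username="serveruser", key_filename=server_key, sock=channel)

        # the second connection can now run commands or transfer files directly on server
        stdin, stdout, stderr = server.exec_command("hostname")
        print(stdout.read().decode())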

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!

    Read the article

  • Trying to add data to sql from link click and return results via jquery or ajax

    - by Jay Schires
    I am not familiar with jquery or ajax, but i do know it is whats needed to perform the action I want. I have created a wordpress plugin that updates a database table based on the users click. Right now it refreshes the page to return the results, but I want to stop the page refresh and return data via ajax I believe. If anyone is interested in helping me figure this out I would be very appreciative or even willing to pay. Thanks! Here is the plugin code: function BoardLikeItGetDelim($postid) { global $wp_rewrite; if($wp_rewrite->using_permalinks()) { if(isset($_GET['mbpost'])) return "?mbpost=".$postid."&"; return "?"; } else { if(isset($_GET['mbpost'])) return "&mbpost=".$postid."&"; return "&"; } } function AddBoardLikeItButton($postid) { global $user_ID; if(isset($_GET['board-like-it-action']) && $_GET['board-like-it-action'] == "like" && $_GET['bpid'] == $postid) BoardLikeItLike($user_ID, $_GET['bpid']); if(isset($_GET['board-like-it-action']) && $_GET['board-like-it-action'] == "unlike" && $_GET['bpid'] == $postid) BoardLikeItUnLike($user_ID, $_GET['bpid']); $num_likes = BoardLikeItGetNumLikes($postid); if(!BoardLikeItIsLiked($user_ID, $postid)) echo "<HREF LINK='".BoardLikeItGetDelim($postid)."board-like-it-action=like&bpid=".$postid."#mngl-board-post-message-".$postid."'>Like</a> ".$num_likes."" . "<br/>"; else echo "<HREF LINK ='".BoardLikeItGetDelim($postid)."board-like-it-action=unlike&bpid=".$postid."#mngl-board-post-message-".$postid."'>Un-Like</a> " . "<br/><span style='display: inline-block; padding: 0px; bottom: -5px; position: relative; border: 0px;'><IMAGE='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/top-up.png' /></span><div style='-moz-border-radius: 4px; -khtml-border-radius: 4px; -webkit-border-radius: 4px; font-family: Verdana, Geneva, sans-serif; font-size: 10px; color: #000; background-color: #B8C9DB; width: 90%; margin: 0px; display: block; padding-top: 4px; padding-right: 5px; padding-bottom: 4px; padding-left: 6px;'>" . "<IMAGE='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/thumb_up.png'/> " .BoardLikeItShowLikers($postid). "like this." . "</div>"; } function BoardLikeItShowLikers($postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); $names = ""; if($results[0] != "") foreach($results as $r) { $userinfo = get_usermeta($r, 'user_login'); $names .= $userinfo.", "; } return $names; } function BoardLikeItGetNumLikes($postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if($results[0] != '') return count($results)."<br/><span style='display: inline-block; padding: 0px; bottom: -5px; position: relative; border: 0px;'><IMAGE='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/top-up.png' /></span><div style='-moz-border-radius: 4px; -khtml-border-radius: 4px; -webkit-border-radius: 4px; font-family: Verdana, Geneva, sans-serif; font-size: 10px; color: #000; background-color: #B8C9DB; width: 90%; margin: 0px; display: inline-block; border: 0px; padding-top: 0px; padding-right: 5px; padding-bottom: 1px; padding-left: 6px;'>" . "<IMAGE='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/thumb_up.png'/> " .BoardLikeItShowLikers($postid). "likes this." . 
"</div>"; else return ""; } function BoardLikeItLike($user_ID, $postid) { global $wpdb; $likers = array(); $likersnew = array(); $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',',$result); if($results[0] != "") { if(!in_array($user_ID, $results)) $results[] = $user_ID; $likers = implode(',',$results); $wpdb->query($wpdb->prepare("UPDATE ".BoardLikeItGetDBName()." SET `likers` = '{$likers}' WHERE `mngl_id` = {$postid}")); } else { $likersnew[] = $user_ID; $likersnew = implode(',',$likersnew); $wpdb->query($wpdb->prepare("INSERT INTO ".BoardLikeItGetDBName()." (`mngl_id`, `likers`) VALUES ('{$postid}', '{$likersnew}')")); } } function BoardLikeItUnLike($user_ID, $postid) { global $wpdb; $likers = array(); $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if(in_array($user_ID, $results)) { $results = BoardLikeItRemoveFromArray($results, $user_ID); if(!empty($results)) { $likers = implode(',', $results); $wpdb->query($wpdb->prepare("UPDATE ".BoardLikeItGetDBName()." SET `likers` = '{$likers}' WHERE `mngl_id` = {$postid}")); } else { $wpdb->query($wpdb->prepare("DELETE FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); } } } function BoardLikeItIsLiked($user_ID, $postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if(in_array($user_ID, $results)) return true; else return false; } function BoardLikeItActivate() { global $wpdb; $charset_collate = ''; if($wpdb->has_cap('collation')) { if(!empty($wpdb->charset)) $charset_collate = "DEFAULT CHARACTER SET $wpdb->charset"; if(!empty($wpdb->collate)) $charset_collate .= " COLLATE $wpdb->collate"; } $table_sql = "CREATE TABLE ".BoardLikeItGetDBName()."( `mngl_id` int(11) NOT NULL, `likers` longtext NOT NULL, PRIMARY KEY (`mngl_id`)) {$charset_collate};"; require_once(ABSPATH.'wp-admin/includes/upgrade.php'); dbDelta($table_sql); } function BoardLikeItGetDBName() { global $wpdb; return $wpdb->prefix."board_like_it"; } function BoardLikeItRemoveFromArray($arr, $key) { $new = array(); foreach($arr as $j => $i) { if($i != $key) $new[] = $i; } return $new; }

    Read the article

  • Add a form to backup routine

    - by Gerard Flynn
    hi how do you put a process bar and button onto this code i have class and want to add a gui on to the code using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Data.SqlClient; using System.IO; using System.Threading; using Tamir.SharpSsh; using System.Security.Cryptography; using ICSharpCode.SharpZipLib.Checksums; using ICSharpCode.SharpZipLib.Zip; using ICSharpCode.SharpZipLib.GZip; namespace backup { public partial class Form1 : Form { public Form1() { InitializeComponent(); } /// <summary> /// Summary description for Class1. /// </summary> public class Backup { private string dbName; private string dbUsername; private string dbPassword; private static string baseDir; private string backupName; private static bool isBackup; private string keyString; private string ivString; private string[] backupDirs = new string[0]; private string[] excludeDirs = new string[0]; private ZipOutputStream zipOutputStream; private string backupFile; private string zipFile; private string encryptedFile; static void Main() { Backup.Log("BackupUtility loaded"); try { new Backup(); if (!isBackup) MessageBox.Show("Restore complete"); } catch (Exception e) { Backup.Log(e.ToString()); if (!isBackup) MessageBox.Show("Error restoring!\r\n" + e.Message); } } private void LoadAppSettings() { this.backupName = System.Configuration.ConfigurationSettings.AppSettings["BackupName"].ToString(); this.dbName = System.Configuration.ConfigurationSettings.AppSettings["DBName"].ToString(); this.dbUsername = System.Configuration.ConfigurationSettings.AppSettings["DBUsername"].ToString(); this.dbPassword = System.Configuration.ConfigurationSettings.AppSettings["DBPassword"].ToString(); //default to using where we are executing this assembly from Backup.baseDir = System.Reflection.Assembly.GetExecutingAssembly().Location.Substring(0, System.Reflection.Assembly.GetExecutingAssembly().Location.LastIndexOf("\\")) + "\\"; Backup.isBackup = bool.Parse(System.Configuration.ConfigurationSettings.AppSettings["IsBackup"].ToString()); this.keyString = System.Configuration.ConfigurationSettings.AppSettings["KeyString"].ToString(); this.ivString = System.Configuration.ConfigurationSettings.AppSettings["IVString"].ToString(); this.backupDirs = GetSetting("BackupDirs", ','); this.excludeDirs = GetSetting("ExcludeDirs", ','); } private string[] GetSetting(string settingName, char delimiter) { if (System.Configuration.ConfigurationSettings.AppSettings[settingName] != null) { string settingVal = System.Configuration.ConfigurationSettings.AppSettings[settingName].ToString(); if (settingVal.Length > 0) return settingVal.Split(delimiter); } return new string[0]; } public Backup() { this.LoadAppSettings(); if (isBackup) this.DoBackup(); else this.DoRestore(); Log("Finished"); } private void DoRestore() { System.Windows.Forms.OpenFileDialog fileDialog = new System.Windows.Forms.OpenFileDialog(); fileDialog.Title = "Choose .encrypted file"; fileDialog.Filter = "Encrypted files (*.encrypted)|*.encrypted|All files (*.*)|*.*"; fileDialog.InitialDirectory = Backup.baseDir; if (fileDialog.ShowDialog() == System.Windows.Forms.DialogResult.OK) { //string encryptedFile = GetFileName("encrypted"); string encryptedFile = fileDialog.FileName; string decryptedFile = this.GetDecryptedFilename(encryptedFile); //string originalFile = GetFileName("original"); this.Decrypt(encryptedFile, decryptedFile); //this.UnzipFile(decryptedFile, 
originalFile); } } //use the same filename as the backup except replace ".encrypted" with ".decrypted.zip" private string GetDecryptedFilename(string encryptedFile) { string name = encryptedFile.Substring(0, encryptedFile.LastIndexOf(".")); name += ".decrypted.zip"; return name; } private void DoBackup() { this.backupFile = GetFileName("bak"); this.zipFile = GetFileName("zip"); this.encryptedFile = GetFileName("encrypted"); this.DeleteFiles(); this.zipOutputStream = new ZipOutputStream(File.Create(zipFile)); try { //backup database first if (this.dbName.Length > 0) { this.BackupDB(backupFile); this.ZipFile(backupFile, this.GetName(backupFile)); } //zip any directories specified in config file this.ZipUserSpecifiedFilesAndDirectories(this.backupDirs); } finally { this.zipOutputStream.Finish(); this.zipOutputStream.Close(); } this.Encrypt(zipFile, encryptedFile); this.SCPFile(encryptedFile); this.DeleteFiles(); } /// <summary> /// Deletes any files created by the backup process, namely the DB backup file, /// the zip of all files backuped up, and the encrypred zip file /// </summary> private void DeleteFiles() { File.Delete(this.backupFile); File.Delete(this.zipFile); ///File.Delete(this.encryptedFile); } private void ZipUserSpecifiedFilesAndDirectories(string[] fileNames) { foreach (string fileName in fileNames) { string name = fileName.Trim(); if (name.Length > 0) { Log("Zipping " + name); this.ZipFile(name, this.GetNameFromDir(name)); } } } private void SCPFile(string inputPath) { string sshServer = System.Configuration.ConfigurationSettings.AppSettings["SSHServer"].ToString(); string sshUsername = System.Configuration.ConfigurationSettings.AppSettings["SSHUsername"].ToString(); string sshPassword = System.Configuration.ConfigurationSettings.AppSettings["SSHPassword"].ToString(); if (sshServer.Length > 0 && sshUsername.Length > 0 && sshPassword.Length > 0) { Scp scp = new Scp(sshServer, sshUsername, sshPassword); //Copy a file from local machine to remote SSH server scp.Connect(); Log("Connected to " + sshServer); //scp.Put(inputPath, "/home/wal/temp.txt"); scp.Put(inputPath, GetName(inputPath)); scp.Close(); } else { Log("Not SCP as missing login details"); } } private string GetName(string inputPath) { FileInfo info = new FileInfo(inputPath); return info.Name; } private string GetNameFromDir(string inputPath) { DirectoryInfo info = new DirectoryInfo(inputPath); return info.Name; } private static void Log(string msg) { try { string toLog = DateTime.Now.ToString() + ": " + msg; System.Diagnostics.Debug.WriteLine(toLog); System.IO.FileStream fs = new System.IO.FileStream(baseDir + "app.log", System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.ReadWrite); System.IO.StreamWriter m_streamWriter = new System.IO.StreamWriter(fs); m_streamWriter.BaseStream.Seek(0, System.IO.SeekOrigin.End); m_streamWriter.WriteLine(toLog); m_streamWriter.Flush(); m_streamWriter.Close(); fs.Close(); } catch (Exception e) { Console.WriteLine(e.ToString()); } } private byte[] GetFileBytes(string path) { FileStream stream = new FileStream(path, FileMode.Open); byte[] bytes = new byte[stream.Length]; stream.Read(bytes, 0, bytes.Length); stream.Close(); return bytes; } private void WriteFileBytes(byte[] bytes, string path) { FileStream stream = new FileStream(path, FileMode.Create); stream.Write(bytes, 0, bytes.Length); stream.Close(); } private void UnzipFile(string inputPath, string outputPath) { ZipInputStream zis = new ZipInputStream(File.OpenRead(inputPath)); ZipEntry theEntry = zis.GetNextEntry(); FileStream 
streamWriter = File.Create(outputPath); int size = 2048; byte[] data = new byte[2048]; while (true) { size = zis.Read(data, 0, data.Length); if (size > 0) { streamWriter.Write(data, 0, size); } else { break; } } streamWriter.Close(); zis.Close(); } private bool ExcludeDir(string dirName) { foreach (string excludeDir in this.excludeDirs) { if (dirName == excludeDir) return true; } return false; } private void ZipFile(string inputPath, string zipName) { FileAttributes fa = File.GetAttributes(inputPath); if ((fa & FileAttributes.Directory) != 0) { string dirName = zipName + "/"; ZipEntry entry1 = new ZipEntry(dirName); this.zipOutputStream.PutNextEntry(entry1); string[] subDirs = Directory.GetDirectories(inputPath); //create directories first foreach (string subDir in subDirs) { DirectoryInfo info = new DirectoryInfo(subDir); string name = info.Name; if (this.ExcludeDir(name)) Log("Excluding " + dirName + name); else this.ZipFile(subDir, dirName + name); } //then store files string[] fileNames = Directory.GetFiles(inputPath); foreach (string fileName in fileNames) { FileInfo info = new FileInfo(fileName); string name = info.Name; this.ZipFile(fileName, dirName + name); } } else { Crc32 crc = new Crc32(); this.zipOutputStream.SetLevel(6); // 0 - store only to 9 - means best compression FileStream fs = null; try { fs = File.OpenRead(inputPath); } catch (IOException ioEx) { Log("WARNING! " + ioEx.Message);//might be in use, skip file in this case } if (fs != null) { byte[] buffer = new byte[fs.Length]; fs.Read(buffer, 0, buffer.Length); ZipEntry entry = new ZipEntry(zipName); entry.DateTime = DateTime.Now; // set Size and the crc, because the information // about the size and crc should be stored in the header // if it is not set it is automatically written in the footer. // (in this case size == crc == -1 in the header) // Some ZIP programs have problems with zip files that don't store // the size and crc in the header. entry.Size = fs.Length; fs.Close(); crc.Reset(); crc.Update(buffer); entry.Crc = crc.Value; this.zipOutputStream.PutNextEntry(entry); this.zipOutputStream.Write(buffer, 0, buffer.Length); } } } private void Encrypt(string inputPath, string outputPath) { RijndaelManaged rijndaelManaged = new RijndaelManaged(); byte[] encrypted; byte[] toEncrypt; //Create a new key and initialization vector. //myRijndael.GenerateKey(); //myRijndael.GenerateIV(); /*des.GenerateKey(); des.GenerateIV(); string temp1 = Convert.ToBase64String(des.Key); string temp2 = Convert.ToBase64String(des.IV);*/ //Get the key and IV. byte[] key = Convert.FromBase64String(keyString); byte[] IV = Convert.FromBase64String(ivString); //Get an encryptor. ICryptoTransform encryptor = rijndaelManaged.CreateEncryptor(key, IV); //Encrypt the data. MemoryStream msEncrypt = new MemoryStream(); CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write); //Convert the data to a byte array. toEncrypt = this.GetFileBytes(inputPath); //Write all data to the crypto stream and flush it. csEncrypt.Write(toEncrypt, 0, toEncrypt.Length); csEncrypt.FlushFinalBlock(); //Get encrypted array of bytes. 
encrypted = msEncrypt.ToArray(); WriteFileBytes(encrypted, outputPath); } private void Decrypt(string inputPath, string outputPath) { RijndaelManaged myRijndael = new RijndaelManaged(); //DES des = new DESCryptoServiceProvider(); byte[] key = Convert.FromBase64String(keyString); byte[] IV = Convert.FromBase64String(ivString); byte[] encrypted = this.GetFileBytes(inputPath); byte[] fromEncrypt; //Get a decryptor that uses the same key and IV as the encryptor. ICryptoTransform decryptor = myRijndael.CreateDecryptor(key, IV); //Now decrypt the previously encrypted message using the decryptor // obtained in the above step. MemoryStream msDecrypt = new MemoryStream(encrypted); CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read); fromEncrypt = new byte[encrypted.Length]; //Read the data out of the crypto stream. int bytesRead = csDecrypt.Read(fromEncrypt, 0, fromEncrypt.Length); byte[] readBytes = new byte[bytesRead]; Array.Copy(fromEncrypt, 0, readBytes, 0, bytesRead); this.WriteFileBytes(readBytes, outputPath); } private string GetFileName(string extension) { return baseDir + backupName + "_" + DateTime.Now.ToString("yyyyMMdd") + "." + extension; } private void BackupDB(string backupPath) { string sql = @"DECLARE @Date VARCHAR(300), @Dir VARCHAR(4000) --Get today date SET @Date = CONVERT(VARCHAR, GETDATE(), 112) --Set the directory where the back up file is stored SET @Dir = '"; sql += backupPath; sql += @"' --create a 'device' to write to first EXEC sp_addumpdevice 'disk', 'temp_device', @Dir --now do the backup BACKUP DATABASE " + this.dbName; sql += @" TO temp_device WITH FORMAT --Drop the device EXEC sp_dropdevice 'temp_device' "; //Console.WriteLine("sql="+sql); Backup.Log("Starting backup of " + this.dbName); ExecuteSQL(sql); } /// <summary> /// Executes the specified SQL /// Returns true if no errors were encountered during execution /// </summary> /// <param name="procedureName"></param> private void ExecuteSQL(string sql) { SqlConnection conn = new SqlConnection(this.GetDBConnectString()); try { SqlCommand comm = new SqlCommand(sql, conn); conn.Open(); comm.ExecuteNonQuery(); } finally { conn.Close(); } } private string GetDBConnectString() { StringBuilder builder = new StringBuilder(); builder.Append("Data Source=127.0.0.1; User ID="); builder.Append(this.dbUsername); builder.Append("; Password="); builder.Append(this.dbPassword); builder.Append("; Initial Catalog="); builder.Append(this.dbName); builder.Append(";Connect Timeout=30"); return builder.ToString(); } } } }

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
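
    As a small sketch of how keys under this naming scheme break apart, the pattern used further down in this post to tear the form keys apart can be exercised like this (the example keys and the tuple layout are illustrative only):

        import re

        # same pattern the post uses to tear apart posted form keys
        KEY_RE = re.compile(r"^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$")

        def parse_key(key):
            """Return (is_new, element_id, field, attr_id) for a form key, or None if it doesn't match."""
            m = KEY_RE.match(key)
            if not m:
                return None
            prefix, element_id, field, attr_id = m.groups()
            return (prefix == "n", int(element_id), field, int(attr_id) if attr_id else None)

        # e12345_a678 -> existing element 12345, attribute 678
        # n1_t        -> new element 1, its element type
        for key in ["e12345_a678", "n1_t", "n1_p", "e12345_t"]:
            print(key, parse_key(key))
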
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Sensible Way to Pass Web Data to Sql Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

Set Data = Server.CreateObject("ADODB.Recordset")
Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
Data.CursorLocation = adUseClient
Data.CursorType = adOpenDynamic
Data.Open

This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate whether it is new or addresses an existing element). I use a regular expression to tear apart the form keys (a sketch of this loop appears at the end of this post):

"^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

Then, adding an attribute looks like this:

Data.AddNew
ElementID.Value = DataID
AttrID.Value = Integerize(Matches(0).SubMatches(3))
AttrValue.Value = Request.Form(Key)
Data.Update

ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

Set Cmd = Server.CreateObject("ADODB.Command")
With Cmd
    Set .ActiveConnection = MyDBConn
    .CommandType = adCmdStoredProc
    .CommandText = "DataPost"
    .Prepared = False
    .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
    .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
End With
Result.Open Cmd ' previously created recordset object with options set

Here's the function that does the xml conversion:

Private Function XMLFromRecordset(Recordset)
    Dim Stream
    Set Stream = Server.CreateObject("ADODB.Stream")
    Stream.Open
    Recordset.Save Stream, adPersistXML
    Stream.Position = 0
    XMLFromRecordset = Stream.ReadText
End Function

Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

CREATE PROCEDURE [dbo].[DataPost]
    @ElementMetaData ntext,
    @ElementData ntext
AS
DECLARE @hdoc int
--- snip ---
EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
SELECT *
FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
WITH (
    ElementID int,
    ElementTypeID int,
    ElementID1 int,
    ElementID2 int
)
ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

EXEC sp_xml_removedocument @hdoc
--- snip ---

UPDATE E
SET E.ElementTypeID = M.ElementTypeID
FROM Element E
INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

The following query does the correlation of the negative new element ids to the newly inserted ones:

UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
SET NewElementID = Scope_Identity() - @@RowCount + DataID
WHERE ElementID < 0

Other set-based queries do all the other work of validating that the attributes are allowed and are the correct data type, and of inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
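The form-walking step mentioned earlier (stepping through Request.Form with the regular expression and the Scripting.Dictionary) is described but not shown in full in the post. Here is a rough Classic ASP sketch of what that loop could look like; it is not the poster's actual code, and the names Re, Key, and the placeholder branch bodies are illustrative only:

' Rough sketch only: walk the posted form keys and route each one by the
' letter group of the naming scheme (e/n prefix, a/p/t/c suffix).
Dim Re, Key, Matches
Set Re = New RegExp
Re.Pattern = "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

For Each Key In Request.Form
    Set Matches = Re.Execute(Key)
    If Matches.Count = 1 Then
        Select Case Matches(0).SubMatches(2)
            Case "a"
                ' attribute value for an existing (e) or new (n) element:
                ' add a row to the values recordset here
            Case "t", "p", "c"
                ' element type / parent / child metadata:
                ' update the Dictionary entry for this element here
        End Select
    End If
Next

SubMatches(2) is the third capture group (the a/p/t/c letter), which is consistent with the post's use of SubMatches(3) for the numeric attribute ID.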

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • iTunes 9.0.2 hangs on launch on Mac OS X 10.6.2

    - by dlamblin
    My iTunes 9.0.2 hangs on launch in OS X 10.6.2. This doesn't happen all the time, only if I've been running for a while. Then it will recur until I restart. Similarly Safari 4.0.4 will hang in the flash player plugin when about to play a video. If I restart both these problems go away until later. Based on this crash dump I am suspecting Audio Hijack Pro. I will try to install a newer version of the driver involved, but so far I haven't had much luck. I have uninstalled the Flash Plugin (10.0.r42 and r32) but clearly I want it in the long run. This is iTunes' crash report. Date/Time: 2009-12-14 19:56:02 -0500 OS Version: 10.6.2 (Build 10C540) Architecture: x86_64 Report Version: 6 Command: iTunes Path: /Applications/iTunes.app/Contents/MacOS/iTunes Version: 9.0.2 (9.0.2) Build Version: 2 Project Name: iTunes Source Version: 9022501 Parent: launchd [120] PID: 16878 Event: hang Duration: 3.55s (sampling started after 2 seconds) Steps: 16 (100ms sampling interval) Pageins: 5 Pageouts: 0 Process: iTunes [16878] Path: /Applications/iTunes.app/Contents/MacOS/iTunes UID: 501 Thread 8f96000 User stack: 16 ??? (in iTunes + 6633) [0x29e9] 16 ??? (in iTunes + 6843) [0x2abb] 16 ??? (in iTunes + 11734) [0x3dd6] 16 ??? (in iTunes + 44960) [0xbfa0] 16 ??? (in iTunes + 45327) [0xc10f] 16 ??? (in iTunes + 2295196) [0x23159c] 16 ??? (in iTunes + 103620) [0x1a4c4] 16 ??? (in iTunes + 105607) [0x1ac87] 16 ??? (in iTunes + 106442) [0x1afca] 16 OpenAComponent + 433 (in CarbonCore) [0x972e9dd0] 16 CallComponentOpen + 43 (in CarbonCore) [0x972ebae7] 16 CallComponentDispatch + 29 (in CarbonCore) [0x972ebb06] 16 DefaultOutputAUEntry + 319 (in CoreAudio) [0x70031117] 16 AUGenericOutputEntry + 15273 (in CoreAudio) [0x7000e960] 16 AUGenericOutputEntry + 13096 (in CoreAudio) [0x7000e0df] 16 AUGenericOutputEntry + 9628 (in CoreAudio) [0x7000d353] 16 ??? [0xe0c16d] 16 ??? [0xe0fdf8] 16 ??? [0xe0e1e7] 16 ahs_hermes_CoreAudio_init + 32 (in Instant Hijack Server) [0x13fc7e9] 16 semaphore_wait_signal_trap + 10 (in libSystem.B.dylib) [0x9798e922] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Thread 9b9eb7c User stack: 16 thread_start + 34 (in libSystem.B.dylib) [0x979bbe42] 16 _pthread_start + 345 (in libSystem.B.dylib) [0x979bbfbd] 16 ??? 
(in iTunes + 4011870) [0x3d475e] 16 CFRunLoopRun + 84 (in CoreFoundation) [0x993497a4] 16 CFRunLoopRunSpecific + 452 (in CoreFoundation) [0x99343864] 16 __CFRunLoopRun + 2079 (in CoreFoundation) [0x9934477f] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x9798e8da] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 9bc8b7c User stack: 16 start_wqthread + 30 (in libSystem.B.dylib) [0x979b4336] 16 _pthread_wqthread + 390 (in libSystem.B.dylib) [0x979b44f1] 16 _dispatch_worker_thread2 + 234 (in libSystem.B.dylib) [0x979b4a68] 16 _dispatch_queue_invoke + 163 (in libSystem.B.dylib) [0x979b4cc3] 16 kevent + 10 (in libSystem.B.dylib) [0x979b50ea] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x1000 - 0xbecfea com.apple.iTunes 9.0.2 (9.0.2) <1F665956-0131-39AF-F334-E29E510D42DA> /Applications/iTunes.app/Contents/MacOS/iTunes 0x13f6000 - 0x1402ff7 com.rogueamoeba.audio_hijack_server.hermes 2.2.2 (2.2.2) <9B29AE7F-6951-E63F-616A-482B62179A5C> /usr/local/hermes/modules/Instant Hijack Server.hermesmodule/Contents/MacOS/Instant Hijack Server 0x70000000 - 0x700cbffb com.apple.audio.units.Components 1.6.1 (1.6.1) <600769A2-479A-CA6E-A214-C8766F7CBD0F> /System/Library/Components/CoreAudio.component/Contents/MacOS/CoreAudio 0x97284000 - 0x975a3fe7 com.apple.CoreServices.CarbonCore 861.2 (861.2) <A9077470-3786-09F2-E0C7-F082B7F97838> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x9798e000 - 0x97b32feb libSystem.B.dylib ??? (???) <D45B91B2-2B4C-AAC0-8096-1FC48B7E9672> /usr/lib/libSystem.B.dylib 0x99308000 - 0x9947ffef com.apple.CoreFoundation 6.6.1 (550.13) <AE9FC6F7-F0B2-DE58-759E-7DB89C021A46> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation Process: AirPort Base Station Agent [142] Path: /System/Library/CoreServices/AirPort Base Station Agent.app/Contents/MacOS/AirPort Base Station Agent UID: 501 Thread 8b1d3d4 DispatchQueue 1 User stack: 16 ??? (in AirPort Base Station Agent + 5344) [0x1000014e0] 16 ??? (in AirPort Base Station Agent + 70666) [0x10001140a] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 8b80000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 6e3c7a8 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Thread 8b0f3d4 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Thread 8bcb000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 ??? (in AirPort Base Station Agent + 71314) [0x100011692] 16 ??? 
(in AirPort Base Station Agent + 13712) [0x100003590] 16 ??? (in AirPort Base Station Agent + 71484) [0x10001173c] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Binary Images: 0x100000000 - 0x100016fff com.apple.AirPortBaseStationAgent 1.5.4 (154.2) <73DF13C1-AF86-EC2C-9056-8D1946E607CF> /System/Library/CoreServices/AirPort Base Station Agent.app/Contents/MacOS/AirPort Base Station Agent 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: AppleSpell [3041] Path: /System/Library/Services/AppleSpell.service/Contents/MacOS/AppleSpell UID: 501 Thread 999a000 DispatchQueue 1 User stack: 16 ??? (in AppleSpell + 5852) [0x1000016dc] 16 ??? (in AppleSpell + 6508) [0x10000196c] 16 -[NSSpellServer run] + 72 (in Foundation) [0x7fff81d3b796] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 8a9e7a8 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x100000000 - 0x1000a9fef com.apple.AppleSpell 1.6.1 (61.1) <6DE57CC1-77A0-BC06-45E7-E1EACEBE1A88> /System/Library/Services/AppleSpell.service/Contents/MacOS/AppleSpell 0x7fff81cbc000 - 0x7fff81f3dfe7 com.apple.Foundation 6.6.1 (751.14) <767349DB-C486-70E8-7970-F13DB4CDAF37> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: autofsd [52] Path: /usr/libexec/autofsd UID: 0 Thread 79933d4 DispatchQueue 1 User stack: 16 ??? (in autofsd + 5340) [0x1000014dc] 16 ??? (in autofsd + 6461) [0x10000193d] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 75997a8 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x100000000 - 0x100001ff7 autofsd ??? (???) 
<29276FAC-AEA8-1520-5329-C75F9D453D6C> /usr/libexec/autofsd 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: blued [51] Path: /usr/sbin/blued UID: 0 Thread 7993000 DispatchQueue 1 User stack: 16 ??? (in blued + 5016) [0x100001398] 16 ??? (in blued + 152265) [0x1000252c9] 16 -[NSRunLoop(NSRunLoop) run] + 77 (in Foundation) [0x7fff81d07903] 16 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 270 (in Foundation) [0x7fff81d07a24] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 70db000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 84d2000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Binary Images: 0x100000000 - 0x100044fff blued ??? (???) <ECD752C9-F98E-3052-26BF-DC748281C992> /usr/sbin/blued 0x7fff81cbc000 - 0x7fff81f3dfe7 com.apple.Foundation 6.6.1 (751.14) <767349DB-C486-70E8-7970-F13DB4CDAF37> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: check_afp [84504] Path: /System/Library/Filesystems/AppleShare/check_afp.app/Contents/MacOS/check_afp UID: 0 Thread 1140f000 DispatchQueue 1 User stack: 16 ??? (in check_afp + 5596) [0x1000015dc] 16 ??? (in check_afp + 12976) [0x1000032b0] 16 ??? (in check_afp + 6664) [0x100001a08] 16 ??? (in check_afp + 6520) [0x100001978] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 13ad8b7c DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 13ad6b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 ??? 
(in check_afp + 13071) [0x10000330f] 16 mach_msg_server_once + 285 (in libSystem.B.dylib) [0x7fff878b2417] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 13ad87a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Binary Images: 0x100000000 - 0x100004ff7 com.apple.check_afp 2.0 (2.0) <EE865A7B-8CDC-7649-58E1-6FE2B43F7A73> /System/Library/Filesystems/AppleShare/check_afp.app/Contents/MacOS/check_afp 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: configd [14] Path: /usr/libexec/configd UID: 0 Thread 704a3d4 DispatchQueue 1 User stack: 16 start + 52 (in configd) [0x100001488] 16 main + 2051 (in configd) [0x100001c9e] 16 server_loop + 72 (in configd) [0x1000024f4] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 6e70000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 74a7b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 plugin_exec + 1440 (in configd) [0x100003c5b] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 7560000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 _io_pm_force_active_settings + 2266 (in PowerManagement) [0x10050f968] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 75817a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Thread 8b1db7c User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x100026ff7 configd ??? (???) 
<58C02CBA-5556-4CDC-2763-814C4C7175DE> /usr/libexec/configd 0x10050c000 - 0x10051dfff com.apple.SystemConfiguration.PowerManagement 160.0.0 (160.0.0) <0AC3D2ED-919E-29C7-9EEF-629FBDDA6159> /System/Library/SystemConfiguration/PowerManagement.bundle/Contents/MacOS/PowerManagement 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: coreaudiod [114] Path: /usr/sbin/coreaudiod UID: 202 Thread 83b93d4 DispatchQueue 1 User stack: 16 ??? (in coreaudiod + 3252) [0x100000cb4] 16 ??? (in coreaudiod + 26505) [0x100006789] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 847e3d4 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 854c000 User stack: 3 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 3 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 3 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x10001ffef coreaudiod ??? (???) <A060D20F-A6A7-A3AE-84EC-11D7D7DDEBC6> /usr/sbin/coreaudiod 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: coreservicesd [66] Path: /System/Library/CoreServices/coreservicesd UID: 0 Thread 7994000 DispatchQueue 1 User stack: 16 ??? 
(in coreservicesd + 3756) [0x100000eac] 16 _CoreServicesServerMain + 522 (in CarbonCore) [0x7fff8327a972] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 76227a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 read + 10 (in libSystem.B.dylib) [0x7fff87877426] Kernel stack: 16 lo64_unix_scall + 77 [0x29e3fd] 16 unix_syscall64 + 617 [0x4ee947] 16 read_nocancel + 158 [0x496add] 16 write + 312 [0x49634d] 16 get_pathbuff + 3054 [0x3023db] 16 tsleep + 105 [0x4881ce] 16 wakeup + 786 [0x487da7] 16 thread_block + 33 [0x226fb5] 16 thread_block_reason + 331 [0x226f27] 16 thread_dispatch + 1950 [0x226c88] 16 machine_switch_context + 753 [0x2a5a37] Thread 7622b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 fmodWatchConsumer + 347 (in CarbonCore) [0x7fff8322f23f] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Thread 79913d4 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 84d2b7c User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Thread 9b643d4 User stack: 15 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 15 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x100000fff coreservicesd ??? (???) <D804E55B-4376-998C-AA25-2ADBFDD24414> /System/Library/CoreServices/coreservicesd 0x7fff831cb000 - 0x7fff834fdfef com.apple.CoreServices.CarbonCore 861.2 (861.2) <39F3B259-AC2A-792B-ECFE-4F3E72F2D1A5> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: cron [31] Path: /usr/sbin/cron UID: 0 Thread 75acb7c DispatchQueue 1 User stack: 16 ??? (in cron + 2872) [0x100000b38] 16 ??? (in cron + 3991) [0x100000f97] 16 sleep + 61 (in libSystem.B.dylib) [0x7fff878f5090] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Binary Images: 0x100000000 - 0x100006fff cron ??? (???) <3C5DCC7E-B6E8-1318-8E00-AB721270BFD4> /usr/sbin/cron 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) 
<526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: cvmsServ [104] Path: /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/cvmsServ UID: 0 Thread 761f3d4 DispatchQueue 1 User stack: 16 ??? (in cvmsServ + 4100) [0x100001004] 16 ??? (in cvmsServ + 23081) [0x100005a29] 16 mach_msg_server + 597 (in libSystem.B.dylib) [0x7fff878ea1c8] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Binary Images: 0x100000000 - 0x100008fff cvmsServ ??? (???) <6200AD80-4159-5656-8736-B72B7388C461> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/cvmsServ 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: DirectoryService [11] Path: /usr/sbin/DirectoryService UID: 0 Thread 70db7a8 DispatchQueue 1 User stack: 16 start + 52 (in DirectoryService) [0x10000da74] 16 main + 3086 (in DirectoryService) [0x10000e68a] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread <multiple> DispatchQueue 6 User stack: 17 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 17 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 231 (in libSystem.B.dylib) [0x7fff87887279] 16 _dispatch_call_block_and_release + 15 (in libSystem.B.dylib) [0x7fff878a8ce8] 16 syscall + 10 (in libSystem.B.dylib) [0x7fff878a92da] 1 _disp

    Read the article

  • Stream Media from Windows 7 to XP with VLC Media Player

    - by DigitalGeekery
So you've got yourself a new computer with Windows 7 and you're itching to take advantage of its ability to stream media across your home network. But the rest of the family is still on Windows XP and you're not quite ready to shell out the cash for the upgrades. Well, today we'll show you how to easily stream media from Windows 7 to Windows XP with VLC Media Player. On the host computer running Windows 7, you'll need to have an account set up with both a username and password. A blank password will not work. The media files will need to be located in a shared folder. Note: If the media files are located within the Public directory, or within the profile of the user account you use to log into the Windows 7 computer, they will be shared automatically.

Sharing your Media Folders
On your Windows 7 computer, right-click on the folder containing the files you'd like to stream and choose Properties. On the Sharing tab of the folder properties, click the Share button. Type or select from the drop-down the user account you'll use to log in, or select "Everyone" to share with all users. Then click Add. You may change the permission level, but only Read permission is required to play the media. Click OK. Repeat this process for any additional folders you wish to share.

The Windows XP Client Computer
Now that we've shared our media folders from the Windows 7 computer, we're ready to play our files on the Windows XP computer. Download and install the VLC Media Player (see link below), then open VLC. Click on Media from the menu and select Open File… Browse your network for the shared folder that contains your media. You'll be prompted to log in to the host computer. Provide the credentials for a user on the Windows 7 computer and click OK. Select your media file and click Open. Your media playback will begin momentarily.

This is a nice and easy way to stream media across your home network without upgrading multiple computers to Windows 7. Plus, VLC is certainly no slouch as a media player. It'll play virtually any video or audio file you can throw at it. Have you already upgraded all your home PCs to Windows 7? Check out our previous article on streaming media between Windows 7 computers on your home network. Download VLC Media Player

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by controlling (or preventing) concurrent access to a database object. From Book On-Line:

LCK_M_BU – Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
LCK_M_IS – Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
LCK_M_IU – Occurs when a task is waiting to acquire an Intent Update (IU) lock.
LCK_M_IX – Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
LCK_M_S – Occurs when a task is waiting to acquire a Shared lock.
LCK_M_SCH_M – Occurs when a task is waiting to acquire a Schema Modify lock.
LCK_M_SCH_S – Occurs when a task is waiting to acquire a Schema Share lock.
LCK_M_SIU – Occurs when a task is waiting to acquire a Shared With Intent Update lock.
LCK_M_SIX – Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
LCK_M_U – Occurs when a task is waiting to acquire an Update lock.
LCK_M_UIX – Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
LCK_M_X – Occurs when a task is waiting to acquire an Exclusive lock.

LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available, or are occupied at the moment, for some reason. There is a good chance that waiting queries start to time out if this wait type is very high, and client applications may see degraded performance as well. You can use various methods to find blocking queries (a sample query against these DMVs is sketched at the end of this post):

- EXEC sp_who2
- SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
- DMV – sys.dm_tran_locks
- DMV – sys.dm_os_waiting_tasks

Reducing LCK_M_XXX wait:

- Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep transactions small.
- Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation level of SQL Server is 'Read Committed'. One of my clients has changed their isolation level to "Read Uncommitted". I strongly discourage this, because it will probably lead to lots of dirty reads.
- Identify blocking queries using the various methods described above, and then optimize them.
- Partitioning can be one of the options to consider, because it allows transactions to execute concurrently on different partitions.
- If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as a timeout can work against you.)
Check if there is no memory and IO-related issue using the following counters: Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Read all the post in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
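    As a starting point, the two DMVs mentioned above can be joined to see which sessions are currently stuck on lock waits and which session is blocking them. This is only a sketch, not part of the original post; as with everything else here, try it on a development server first.

        -- List sessions currently waiting on lock (LCK_M_*) waits,
        -- together with the blocking session and the waiting statement.
        SELECT  wt.session_id            AS waiting_session_id,
                wt.wait_type,
                wt.wait_duration_ms,
                wt.blocking_session_id,
                st.text                  AS waiting_sql
        FROM    sys.dm_os_waiting_tasks  AS wt
        JOIN    sys.dm_exec_requests     AS er
                ON er.session_id = wt.session_id
        CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
        WHERE   wt.wait_type LIKE 'LCK%'          -- all lock wait types
        ORDER BY wt.wait_duration_ms DESC;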

    Read the article

  • How to reproject a shapefile from WGS 84 to Spherical/Web Mercator projection.

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a short description of each, but you can google them to learn more. #1: WGS 84 – World Geodetic System (1984) – is a standard reference coordinate system used for cartography, geodesy and navigation. #2: EPSG – European Petroleum Survey Group – was a scientific organization with ties to the European petroleum industry, consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS 84 as (latitude, longitude) pair coordinates in degrees with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used. Which degree representation is used must be declared for the user by the supplier of data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which Google renamed to EPSG:900913 for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the “Popular Visualisation CRS / Mercator”. This is the kind of projection used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth and others to show interactive maps over the web with their nearly precise coordinates.  Reprojection: After reading a lot about reprojecting my coordinates on the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got the ball rolling. This is how I did it. #1 You need to download and open your shapefile using QGIS; it is the one with the biggest number of coordinate reference systems/projections. #2 Use the Plugins menu and enable fTools and the WFS plugin. #3 Use the Vector menu --> Data Management Tools and choose “Define current projection”. Enable “Use predefined reference system” and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS 84 / UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK. #4 Now use the Vector menu --> Data Management Tools and choose “Export to new projection”. The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this: +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs Your output user-defined spatial reference system should look like this: +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs Browse to the place where the reprojected shapefile should go and give it a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your reprojected map with a latitude/longitude pair of coordinates. #5 Now, this is not the actual Spherical/Web Mercator projection, but don’t worry, this is where you have to stop. All the other custom web-mapping portals will pick up this projection and transform it into EPSG::3785 or EPSG:900913, but the coordinates will still remain the lat/lon pair of the projected shapefile. If you want to test a particular known point, QGIS has a lot of room for that. Go ahead and test it.
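    If you would rather verify a known point outside of QGIS, the same two-step reprojection can be scripted. The sketch below is not from the original article: it assumes the pyproj Python package is installed, and the easting/northing values are placeholders for your own test point.

        from pyproj import Transformer

        # Source CRS: the proj string QGIS reported for the original shapefile.
        src_crs = ("+proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 "
                   "+ellps=WGS84 +datum=WGS84 +units=m +no_defs")

        # Step 1: projected metres -> WGS 84 longitude/latitude (EPSG:4326).
        to_wgs84 = Transformer.from_crs(src_crs, "EPSG:4326", always_xy=True)
        lon, lat = to_wgs84.transform(500000.0, 200000.0)  # easting, northing (placeholders)

        # Step 2: the transform the web-mapping portals do for you:
        # WGS 84 -> Spherical/Web Mercator (EPSG:3857, the current code for 3785/900913).
        to_webmerc = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
        x, y = to_webmerc.transform(lon, lat)

        print(lon, lat, x, y)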

    Read the article

  • Integrate Google Docs with Outlook the Easy Way

    - by Matthew Guay
    Want to use Google Docs and Microsoft Office together?  Here’s how you can use Harmony for Google Docs to integrate them seamlessly with Outlook. Harmony for Google Docs is an exciting new plugin for Outlook 2007 (a version for Outlook 2010 is in the works).  It lets you integrate your Google Docs account with Outlook via a sidebar.  From this, you can find any of your Google Docs or upload a new document, and then open the document to view or edit it in Outlook. Getting Started Download Harmony for Google Docs (link below), and install as normal.  Make sure Outlook is closed before you run the install. Next time you open Outlook, the new Harmony sidebar will automatically open.  Enter your Google account info, and click Sign In. Now all of your Google Docs will show up in the sidebar. Double-click any file to open it in Outlook.  You may have to sign in to Google Docs the first time you open a document. Here’s a Google Doc open in Outlook.  Notice that everything works, including full editing. All Google Docs features worked great in our tests except for the new drawings tool.  When we tried to insert a drawing, Outlook had a script error.  This was the only problem we had with Harmony, and it could be due to an interaction between Google Drawings and Outlook’s rendering engine. Harmony makes it easy to find any file in your Google Docs account.  You can search for a file, or sort your files by type, recency, and more. You can also easily add any document to Google Docs directly from Harmony.  You can drag and drop any document, including one attached to an email, onto the Harmony sidebar, and it will be uploaded directly to your Google Docs account. And when you’re writing a new email or reply, click the Show Documents button to open the Harmony sidebar.  From here, you can add documents as usual and share them with the email recipient. Conclusion We previously covered a similar app, OffiSync, which combines Google Docs features with MS Office. However, Harmony makes it much easier to use Google Apps along with Outlook.  This gives you an easy and efficient way to collaborate on documents with coworkers, all without leaving Outlook.  And if your company uses SharePoint instead of Google Docs, Harmony offers a SharePoint edition that integrates with Outlook just as easily! Link: Download Harmony for Google Docs

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v: g++ -v Using built-in specs. COLLECT_GCC=g++ COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper Target: i686-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu Thread model: posix gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) Here's a sample of my code: #include "Word.h" #include <string> using namespace std; pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER; Word::Word(): _occurrences(1) { memset(_buf, 0, 25); } Word::Word(char *str): _occurrences(1) { memset(_buf, 0, 25); if (str != NULL) { strncpy(_buf, str, strlen(str)); } } Neither g++ -c -ansi, g++ -c -std=c++98 nor g++ -c -std=c++03 is able to build the code correctly. I get the following compilation errors: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o Word.cpp: In constructor ‘Word::Word()’: Word.cpp:10:21: error: ‘memset’ was not declared in this scope Word.cpp: In constructor ‘Word::Word(char*)’: Word.cpp:16:21: error: ‘memset’ was not declared in this scope Word.cpp:19:34: error: ‘strlen’ was not declared in this scope Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope Word.cpp: In member function ‘void Word::operator=(const Word&)’: Word.cpp:37:42: error: ‘strlen’ was not declared in this scope Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope Word.cpp: In copy constructor ‘Word::Word(const Word&)’: Word.cpp:44:21: error: ‘memset’ was not declared in this scope Word.cpp:45:52: error: ‘strlen’ was not declared in this scope Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation. Second problem: In order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions.
Here's the error that I am getting: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count main.o: In function `main': /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create' /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send' /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join' /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close' /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink' main.o: In function `count_words(void*)': /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive' collect2: ld returned 1 exit status Here's my makefile: CC=g++ CFLAGS=-c -g -ansi LDFLAGS=-lrt INC=-I/usr/lib/i386-linux-gnu SOURCES=Word.cpp HashMap.cpp main.cpp OBJECTS=$(SOURCES:.cpp=.o) EXECUTABLE=word_count all: $(SOURCES) $(EXECUTABLE) $(EXECUTABLE): $(OBJECTS) $(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@ .cpp.o: $(CC) $(CFLAGS) $< -o $@ clean: rm -f *.o word_count Please help me to resolve both the issues. I searched online relentlessly for any solution of these problems, but no one seems to have encountered these issues.
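    For reference, both symptoms have well-known causes. GCC 4.6 no longer pulls in the C string functions through other headers the way the older GCC on RHEL did, so memset, strlen and strncpy must be declared explicitly; and on Ubuntu 12.04 the linker resolves libraries in command-line order, so -lrt and -pthread need to appear after the object files that use them. A sketch of the first fix, reusing the file names from the question:

        // Word.cpp - pull in <cstring>, which declares memset, strlen and strncpy.
        #include "Word.h"
        #include <cstring>
        #include <string>
        using namespace std;

    In the makefile, the link rule would then look something like $(CC) $(INC) -pthread $(OBJECTS) -o $@ -lrt, so that the real-time and pthread libraries come after the objects that reference mq_open and pthread_create.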

    Read the article

  • Games to Vista Game explorer with Inno Setup

    - by Kraemer
    OK, I'm trying to force my Inno Setup installer to add a shortcut for my game to the Vista Games Explorer. Theoretically this should do the trick: [Files] Source: "GameuxInstallHelper.dll"; DestDir: "{app}"; Flags: ignoreversion overwritereadonly; [Registry] Root: HKLM; Subkey: SOFTWARE\dir\dir; Flags: uninsdeletekeyifempty Root: HKLM; Subkey: SOFTWARE\dir\dir; ValueName: Path; ValueType: String; ValueData: {app}; Flags: uninsdeletekey Root: HKLM; Subkey: SOFTWARE\dir\dir; ValueName: AppFile; ValueType: String; ValueData: {app}\executable.exe; Flags: uninsdeletekey [CustomMessages] en.Local=en en.removemsg=Do you wish to remove game saves and settings? en.taskentry=Play [Code] const PlayTask = 0; AllUsers = 2; Current = 3; type TGUID = record Data1: Cardinal; Data2, Data3: Word; Data4: array [0..7] of char; end; var GUID: TGUID; function GetDC(HWND: DWord): DWord; external 'GetDC@user32.dll stdcall'; function GetDeviceCaps(DC: DWord; Index: Integer): Integer; external 'GetDeviceCaps@gdi32.dll stdcall'; function ReleaseDC(HWND: DWord; DC: DWord): Integer; external 'ReleaseDC@user32.dll stdcall'; function ShowWindow(hWnd: DWord; nCmdShow: Integer): boolean; external 'ShowWindow@user32.dll stdcall'; function SetWindowLong(hWnd: DWord; nIndex: Integer; dwNewLong: Longint): Longint; external 'SetWindowLongA@user32.dll stdcall'; function GenerateGUID(var GUID: TGUID): Cardinal; external 'GenerateGUID@files:GameuxInstallHelper.dll stdcall setuponly'; function AddToGameExplorer(Binary: String; Path: String; InstallType: Integer; var GUID: TGUID): Cardinal; external 'AddToGameExplorerW@files:GameuxInstallHelper.dll stdcall setuponly'; function CreateTask(InstallType: Integer; var GUID: TGUID; TaskType: Integer; TaskNumber: Integer; TaskName: String; Binary: String; Parameters: String): Cardinal; external 'CreateTaskW@files:GameuxInstallHelper.dll stdcall setuponly'; function RetrieveGUIDForApplication(Binary: String; var GUID: TGUID): Cardinal; external 'RetrieveGUIDForApplicationW@{app}\GameuxInstallHelper.dll stdcall uninstallonly'; function RemoveFromGameExplorer(var GUID: TGUID): Cardinal; external 'RemoveFromGameExplorer@{app}\GameuxInstallHelper.dll stdcall uninstallonly'; function RemoveTasks(var GUID: TGUID): Cardinal; external 'RemoveTasks@{app}\GameuxInstallHelper.dll stdcall uninstallonly'; function InitializeSetup(): Boolean; var appath: string; ResultCode: Integer; begin if RegKeyExists(HKEY_LOCAL_MACHINE, 'SOFTWARE\dir\dir') then begin RegQueryStringValue(HKEY_LOCAL_MACHINE, 'SOFTWARE\dir\dir', 'Path', appath) Exec((appath +'\unins000.exe'), '', '', SW_SHOW, ewWaitUntilTerminated, ResultCode) end else begin Result := TRUE end; end; procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep); begin if CurUninstallStep = usUninstall then begin if GetWindowsVersion shr 24 > 5 then begin RetrieveGUIDForApplication(ExpandConstant('{app}\AWL_Release.dll'), GUID); RemoveFromGameExplorer(GUID); RemoveTasks(GUID); UnloadDll(ExpandConstant('{app}\GameuxInstallHelper.dll')); end; end; if CurUninstallStep = usPostUninstall then begin if MsgBox(ExpandConstant('{cm:removemsg}'), mbConfirmation, MB_YESNO)=IDYES then begin DelTree(ExpandConstant('{app}'), True, True, True); end; end; end; procedure CurStepChanged(CurStep: TSetupStep); begin if GetWindowsVersion shr 24 > 5 then begin if CurStep = ssInstall then GenerateGUID(GUID); if CurStep = ssPostInstall then begin AddToGameExplorer(ExpandConstant('{app}\AWL_Release.dll'), ExpandConstant('{app}'), Current, GUID); CreateTask(3, GUID, PlayTask, 0,
ExpandConstant('{cm:taskentry}'), ExpandConstant('{app}\executable.exe'), ''); CreateTask(3, GUID, 1, 0, 'Game Website', 'http://www.gamewebsite.com/', ''); end; end; end; The installer works just fine, but it doesn't place a shortcut to my game in the Games Explorer. Since I believe the problem is with the binary file, I guess that is the part I should ask for help with. So, can anyone please give me a hand here?
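    One thing worth checking, going by the DirectX SDK samples that ship GameuxInstallHelper: the first argument to AddToGameExplorer has to be the binary that carries the embedded Game Definition File (GDF) resource, and the call simply returns an error code (which this script ignores) when that resource is missing. A sketch of the idea, assuming purely for illustration that the GDF is embedded in executable.exe rather than AWL_Release.dll:

        // Sketch only: point the helper at the binary that actually contains the
        // embedded GDF resource, and check the return value instead of ignoring it.
        if CurStep = ssPostInstall then
        begin
          if AddToGameExplorer(ExpandConstant('{app}\executable.exe'),
                               ExpandConstant('{app}'), Current, GUID) <> 0 then
            MsgBox('AddToGameExplorer failed - is the GDF resource embedded?', mbError, MB_OK);
          CreateTask(3, GUID, PlayTask, 0, ExpandConstant('{cm:taskentry}'),
                     ExpandConstant('{app}\executable.exe'), '');
        end;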

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder: As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors.
    Once you create the class library project, there are a few easy steps to change it into a static-file project: The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn’t contain any C# code: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files: <Target Name="Build"> <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" /> </Target> <Target Name="Clean"> <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" /> </Target> <Target Name="Rebuild" DependsOnTargets="Clean;Build"> </Target> The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project, you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files, I hope this solution helps you out!
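    Putting the three steps together, a stripped-down project file for a static-files-only project might look roughly like the sketch below. This is not the actual MvcFuturesFiles.csproj; the ToolsVersion, folder layout and output path are placeholders.

        <Project ToolsVersion="3.5" DefaultTargets="Build"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <OutputPath>bin\</OutputPath>
          </PropertyGroup>
          <ItemGroup>
            <!-- Every file under Content\ is treated as a static Content item. -->
            <Content Include="Content\**\*.*" />
          </ItemGroup>
          <!-- Note: no Import of Microsoft.CSharp.targets, so nothing is compiled. -->
          <Target Name="Build">
            <Copy SourceFiles="@(Content)"
                  DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
          </Target>
          <Target Name="Clean">
            <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
          </Target>
          <Target Name="Rebuild" DependsOnTargets="Clean;Build" />
        </Project>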

    Read the article

  • Add the Vista Style Sidebar Back to Windows 7

    - by Mysticgeek
    If you are moving from Vista to Windows 7, you might miss the Sidebar, which was introduced in Vista. Today we take a look at a couple of options for getting a Sidebar back in Windows 7. Copy Files from Vista Note: In this example we are using 32-bit versions of Vista and Windows 7. Make sure you are logged in with Administrator credentials. If you have a Vista machine running, you can copy the Windows Sidebar files over to the Windows 7 machine. On the Vista machine navigate to C:\Program Files and copy the Windows Sidebar folder and all of its contents over to a flash drive or network location. On the Windows 7 machine go to C:\Program Files and rename the Windows Sidebar folder to something like Windows Sidebar_old. Now copy the Vista Windows Sidebar folder into C:\Program Files… Now you will have both folders…Windows Sidebar and Windows Sidebar_old in your C:\Program Files folder. Right-click on the desktop and select Gadgets. There you are…the original Vista Sidebar is back and will act as it did in Vista. Move Sidebar Gadgets Another workaround, if you don’t have a copy of Vista, is to simply move the Desktop Gadgets you want over to the right side of the screen; they will stay there…no dock needed. Type gadgets into the Search box in the Windows Start Menu and click on Desktop Gadgets. Then drag the included Gadgets you want over to the right side of the screen. Or click on the link to Get more gadgets online to find more. Once you have them where you want, each time you reboot they will still be in the same location. This holds true no matter where you place them on your desktop as well. Install Desktop Sidebar If you want an enhanced sidebar that includes a lot of different features, and don’t have a copy of Vista, you might want to check out Desktop Sidebar Beta (link below). This is a freeware application that works with Windows XP, Vista, and Windows 7. After installation you can access it from the Start Menu… Here is how it will look after you launch it… It includes several pre-installed panels including a clock, Media Player, Search Bar, Slideshow, Messenger, Outlook inbox, Tasks, Quick Launch, Performance…and a lot more. It is highly customizable and allows you to change skins, add various levels of transparency, and a lot more. One caveat with going with Desktop Sidebar is we didn’t find a way to add Windows Gadgets to it (though there might be a plugin for it that we’re not aware of). But there are so many options, you may not mind. However, you can still use the desktop gadgets as you normally would in Windows 7. Believe it or not, some people actually prefer the Vista style Sidebar and would like it back in Windows 7. With these options you can get the Vista Sidebar back if you have a copy of Vista, place the Gadgets on the desktop, or go the freeware route.
Download Desktop Sidebar (freeware)

    Read the article

  • Add Social Elements to Your Gmail Contacts with Rapportive

    - by Matthew Guay
    Would you like to discover more about your contacts?  Xobni is a great tool for this in Outlook, and thanks to a small plugin for Gmail, you can get similar functionality right from your favorite webmail app. Set Up Rapportive in Gmail Browse to the Rapportive site (link below), and click Install to add it to your browser.  Rapportive currently only supports Firefox and Google Chrome.  In this test, we installed it on Google Chrome.  Notice that Chrome warns that Rapportive may access your private data from Gmail, though Rapportive says that it only uses this data securely on your computer or its servers. Next time you log into Gmail, open a message to see the new Rapportive sidebar.  Click Log in to get started. Choose whether you want to let Rapportive access your data. Finally, choose whether to stay logged into Rapportive or to log out when you log out of Gmail.   Using Rapportive Now, when you open an email, you should see more information about your contact on the right side of the message where you usually see Google AdSense ads. You may see an avatar, a short bio, and links to their social networks.  You can add notes about a contact as well, which lets you use Rapportive as a simple CRM. You may see more information on some contacts.  Here we see a contact that shows recent Tweets and links to several social networks. Take Rapportive Further You can add more features to Rapportive with Raplets, which are small extensions that add more information or CRM functionality.  To add these, click the Rapportive button at the top of Gmail, and select Add Raplets to Rapportive. Find a Raplet you want, and click Add This. A popup will open to give you more information about the Raplet; click the Add button at the bottom if you still want it. And, if you wish to close Rapportive without logging out of Gmail, click the Rapportive link in Gmail and select Log out. Conclusion Whether you want to find out more about your contacts or keep track of notes about them, Rapportive is a great way to do this from Gmail.  With tools like this, Gmail gets a bit more powerful and feels more like a desktop application. If you would like this type of functionality in Outlook, check out our article on how to power up Outlook’s search and contacts with Xobni. Add Rapportive to Gmail

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >