Search Results

Search found 17982 results on 720 pages for 'computed values'.


  • Box 2d basic questions

    - by philipp
    I am a bit new to Box2D and I am developing a game built around type and letters. I am using an SVG font and generate the Box2D bodies directly from the glyphs' path definitions, using their convex hulls. I also have a decomposition routine that splits a hull when necessary. All of this is more or less working, except that I get some strange errors which are definitely caused by the scale factors. The problem has two causes: first, the world scale of Box2D; second, the precision of the curve approximation of the glyph vectors. When the input vertices are scaled down for Box2D, numerical precision can make neighbouring vertices equal, which causes errors in Box2D. Scaling my glyphs up a bit makes the problem go away. It also goes away if I choose a different world scale factor, but that slows the whole animation down quite a lot! So if my viewport is about 990px * 600px and I want to animate glyphs in Box2D which should have a size from about 50px * 50px up to 300px * 300px, which scale factor of the b2World should I choose? How small should the smallest distance from one vertex to another be while approximating the glyph vectors? Thanks for the help, greetings, philipp

    EDIT: I continued reading the Box2D docs, and after rethinking the unit system, which is designed to handle objects from 0.1 up to 10 meters, I calculated a scale factor of 75. So objects 600px wide become 8 meters wide in Box2D, and even small objects of about 20px width become 0.26 meters wide. I will go on trying with these values, but if there is somebody out there who could give me a clever piece of advice, I would be happy!
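    A minimal C# sketch of the pixel-to-meter bookkeeping the question describes, assuming the 75 px/m scale factor from the EDIT (the helper names are hypothetical, not part of Box2D):

        static class WorldScale
        {
            // World scale from the EDIT above: 75 pixels per meter
            // (an assumption carried over from the question, not a Box2D constant).
            public const float PixelsPerMeter = 75f;

            public static float ToMeters(float pixels) => pixels / PixelsPerMeter;
            public static float ToPixels(float meters) => meters * PixelsPerMeter;

            // Box2D is tuned for bodies roughly 0.1 to 10 meters in size:
            //   ToMeters(600f) == 8.0f     (largest glyph, fine)
            //   ToMeters(20f)  ~= 0.266f   (smallest glyph, still above 0.1)

            // When approximating a glyph outline, keep a vertex only if it is at
            // least epsilonMeters away from the previously kept vertex, so that
            // scaling down can never collapse two vertices into one. The default
            // here is on the order of Box2D's b2_linearSlop (0.005 m).
            public static bool KeepVertex(float lastX, float lastY, float x, float y,
                                          float epsilonMeters = 0.005f)
            {
                float dx = x - lastX, dy = y - lastY;
                return dx * dx + dy * dy >= epsilonMeters * epsilonMeters;
            }
        }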


  • Elfsign Object Signing on Solaris

    - by danx
    Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files, including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation.

    When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature).

    ELF executable files may also be signed by a 3rd party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.

    Elfsign Algorithms

    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and the RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1, and then in Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:

        Elfsign Algorithm                Solaris Release        Comments
        elfsign sign -F rsa_md5_sha1     S10, S11.0, S11.1      Default for S10. Not recommended*
        elfsign sign -F rsa_sha1         S11.1                  Default for S11.1. Not recommended
        elfsign sign -F rsa_sha256       S11.1 patch SRU7+      Recommended

    *Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs, due to MD5 hash collision problems.

    RSA Key Length. I recommend using an RSA-2048 key length with elfsign, as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).

    Step 1: Create or obtain a key and cert

    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or to create your own self-signed key and cert. I'll briefly explain both methods.

    Obtaining a Certificate from a CA

    To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
    You need to request an RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA-2048 key and a CSR for sending to a CA:

        $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing" \
            outkey=MYPRIVATEKEY.key
        $ openssl rsa -noout -text -in MYPRIVATEKEY.key
        Private-Key: (2048 bit)
        modulus:
            00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94:
            . . . [omitted for brevity] . . .
            c9:c7
        publicExponent: 65537 (0x10001)
        privateExponent:
            26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83:
            . . . [omitted for brevity] . . .
            81
        prime1:
            00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb:
            . . . [omitted for brevity] . . .
            bc:91:d0:40:d6:9d:ac:b5:69
        prime2:
            00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41:
            . . . [omitted for brevity] . . .
            f3:b7:48:de:c3:d9:ce:af:af
        exponent1:
            00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67:
            . . . [omitted for brevity] . . .
            55:50:25:70:d3:ca:b9:ab:99
        exponent2:
            00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df:
            . . . [omitted for brevity] . . .
            23:57:0e:4d:b2:a0:12:d2:f5
        coefficient:
            2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64:
            . . . [omitted for brevity] . . .
            06:94:56:d8:9d:5c:8e:9b
        $ openssl req -noout -text -in MYCSR.p10
        Certificate Request:
            Data:
                Version: 2 (0x2)
                Subject: OU=Canine SW object signing, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94:
                            . . . [omitted for brevity] . . .
                            c9:c7
                        Exponent: 65537 (0x10001)
                Attributes:
            Signature Algorithm: sha1WithRSAEncryption
                b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea:
                . . . [omitted for brevity] . . .
                06:f9:ed:b4

    Secure storage of the RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing); that is, protect the key to guard against unauthorized signatures by others. The private key you generate should be stored in a secure manner, such as in a PIN-protected PKCS#11 keystore using pktool(1); otherwise, others could create signatures in your name. Other secure key storage mechanisms include an SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key.

    Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR:

        $ pktool setpin # use if PIN not set yet
        Enter token passphrase: changeme
        Create new passphrase:
        Re-enter new passphrase:
        Passphrase changed.
        $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \
            format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing"
        $ pktool list keystore=pkcs11
        Enter PIN for Sun Software PKCS#11 softtoken:
        Found 1 asymmetric public keys.
        Key #1 - RSA public key: MYPRIVATEKEY

    Here's another example that uses openssl instead of pktool to generate a private key and CSR:

        $ openssl genrsa -out cert.key 2048
        $ openssl req -new -key cert.key -out MYCSR.p10

    Self-Signed Cert

    You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above.
    The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool, and displays the contents with the openssl command.

        $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \
            keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \
            subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com"
        $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem
        Found 1 certificates.
        1. (X.509 certificate)
            Filename: MYSELFSIGNED.pem
            ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46
            Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Not Before: Oct 17 23:18:00 2013 GMT
            Not After: Oct 12 23:18:00 2033 GMT
            Serial: 0xD06F00D0
            Signature Algorithm: sha256WithRSAEncryption
        $ openssl x509 -noout -text -in MYSELFSIGNED.pem
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 3496935632 (0xd06f00d0)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Validity
                    Not Before: Oct 17 23:18:00 2013 GMT
                    Not After : Oct 12 23:18:00 2033 GMT
                Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b:
                            . . . [omitted for brevity] . . .
                            bf:77
                        Exponent: 65537 (0x10001)
            Signature Algorithm: sha256WithRSAEncryption
                9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1:
                . . . [omitted for brevity] . . .
                5f:78:8e:e8
        $ openssl rsa -noout -text -in MYSELFSIGNED.key
        Private-Key: (2048 bit)
        modulus:
            00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b:
            . . . [omitted for brevity] . . .
            bf:77
        publicExponent: 65537 (0x10001)
        privateExponent:
            0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e:
            . . . [omitted for brevity] . . .
            9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24:
            c1
        prime1:
            00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15:
            . . . [omitted for brevity] . . .
            5b:ca:9c:7c:19:48:77:1e:5d
        prime2:
            00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be:
            . . . [omitted for brevity] . . .
            5f:db:d5:5e:47:89:a7:ef:e3
        exponent1:
            32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9:
            . . . [omitted for brevity] . . .
            97:3e:33:31:89:66:64:d1
        exponent2:
            00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93:
            . . . [omitted for brevity] . . .
            ff:74:d4:be:f3:47:45:bd:cb
        coefficient:
            4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af:
            . . . [omitted for brevity] . . .
            af:01:5f:af:ad:6a:09:bf

    Step 2: Sign the ELF File object

    By now you should have your private key and have obtained, by hook or by crook, a cert (either one from a CA or a self-signed cert you created yourself). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, then signs, verifies, and lists the contents of the ELF signature.

        $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}' > hello.c
        $ make hello
        cc -o hello hello.c
        $ elfsign verify -v -c MYSELFSIGNED.pem -e hello
        elfsign: no signature found in hello.
        $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello
        elfsign: hello signed successfully.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.
        $ elfsign list -f format -e hello
        rsa_sha256
        $ elfsign list -f signer -e hello
        O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
        $ elfsign list -f time -e hello
        October 17, 2013 04:22:49 PM PDT
        $ elfsign verify -v -c MYSELFSIGNED.key -e hello
        elfsign: verification of hello failed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.

    Signing using the pkcs11 keystore

    To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.

    Step 3: Install the cert and test on another system

    Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:

        $ su
        Password:
        # rm /etc/certs/MYSELFSIGNED.key
        # cp MYSELFSIGNED.pem /etc/certs
        # exit
        $ elfsign verify -v hello
        elfsign: verification of hello passed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:24:20 PM PDT.

    After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.

    Under the Hood: elfsign verification

    Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except that the private key exponent is used instead of the public key exponent, and the .SUNW_signature section is written to the ELF file instead of being read from the file.

    1. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
    2. Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
    3. Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
    4. Calculate the expected digest as follows: signature ^ publicKeyExponent % publicKeyModulus (that is, the signature raised to the power of the public key exponent, modulo the public key modulus).
    5. Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00.
    6. If the actual digest == expected digest, the ELF file is verified (OK).

    Further Information

    elfsign(1), pktool(1), and openssl(1) man pages.
    "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign.
    "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates.
    "How to Create a Certificate by Using the pktool gencert Command" in System Administration Guide: Security Services (available at docs.oracle.com)
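    To make the verification arithmetic in steps 4-6 above concrete, here is a minimal C# sketch (a hypothetical helper, not part of elfsign; it follows the simplified padding description above, whereas the real tool parses the full PKCS#1 structure):

        using System;
        using System.Numerics;
        using System.Security.Cryptography;

        static class ElfsignMath
        {
            // elfDigest: SHA-256 digest of the ELF sections (step 1).
            // signature: raw RSA signature bytes from .SUNW_signature (step 2).
            // modulus/exponent: taken from the public key cert (step 3).
            public static bool Verify(byte[] elfDigest, byte[] signature,
                                      BigInteger modulus, BigInteger exponent)
            {
                // Step 4: expected = signature ^ publicKeyExponent % publicKeyModulus
                var sig = new BigInteger(signature, isUnsigned: true, isBigEndian: true);
                byte[] padded = BigInteger.ModPow(sig, exponent, modulus)
                                          .ToByteArray(isUnsigned: true, isBigEndian: true);

                // Step 5: strip the 0x01, 0xff ... 0xff, 0x00 padding (the leading
                // 0x00 byte is already dropped by the big-integer conversion).
                int i = 0;
                if (padded[i++] != 0x01) return false;
                while (i < padded.Length && padded[i] == 0xff) i++;
                if (i >= padded.Length || padded[i++] != 0x00) return false;

                // Step 6: compare actual and expected digests.
                byte[] expected = padded[i..];
                return CryptographicOperations.FixedTimeEquals(elfDigest, expected);
            }
        }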


  • SQL SERVER – What is Page Life Expectancy (PLE) Counter

    - by pinaldave
    During performance tuning consultations there are plenty of counters and values I come across. Today we will quickly talk about the Page Life Expectancy counter, which is commonly known as PLE. You can find the value of PLE by running the following query.

        SELECT [object_name], [counter_name], [cntr_value]
        FROM sys.dm_os_performance_counters
        WHERE [object_name] LIKE '%Manager%'
        AND [counter_name] = 'Page life expectancy'

    The recommended value of the PLE counter is 300 seconds. I have seen this value as low as 45 seconds on a busy system, and as high as 1250 seconds on an unused system. Page Life Expectancy is the number of seconds a page will stay in the buffer pool without references. In simple words, if your page stays longer in the buffer pool (the memory cache area), your PLE is higher, leading to higher performance, since every time a request comes in there is a chance it will find its data in the cache itself instead of going to the hard drive to read the data. Now check your system and post back what this counter value is for you at various times of the day. Does this counter relate in any way to performance issues on your system? Note: There are various other counters which are important to discuss during performance tuning, and this counter is not everything. Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires a permanent connection to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than 5 years within my company's network, I was quite curious about their solution and how it would actually differ from OpenVPN. Well, the short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client and server (if necessary).

    [Screenshot: WatchGuard Firebox SSL - About dialog]

    Borrowing some files from a Windows client installation

    Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles here (and no restart despite the installation of TAP device drivers!), and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package, compared against my long-approved and stable installation, ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit my last look was years ago), this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:

    [Screenshot: Application folder below user profile with configuration and certificate files]

    From there we are going to borrow four files, namely:

        ca.crt
        client.crt
        client.ovpn
        client.pem

    and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.

    Configuration of OpenVPN (console)

    Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps on Ubuntu 13.04 (Raring Ringtail). As usual, there are two ways to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:

        $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome

    Just to be on the safe side. The four above-mentioned files from your Windows machine could be copied anywhere, but either place them below your own user directory or put them (as root) below the default directory:

        /etc/openvpn

    At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's similar information to what you would get from the 'View Logs...' context menu entry in Windows):

        $ sudo openvpn --config client.ovpn

    Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
    [Screenshot: Remote server and user authentication to establish the VPN]

    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

    Simplifying your life - authentication file

    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without needing to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line.

        $ sudo mv client.ovpn client.conf
        $ sudo nano client.conf

    You should have content similar to this:

        dev tun
        client
        proto tcp-client
        ca ca.crt
        cert client.crt
        key client.pem
        tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
        remote-cert-eku "TLS Web Server Authentication"
        remote 1.2.3.4 443
        persist-key
        persist-tun
        verb 3
        mute 20
        keepalive 10 60
        cipher AES-256-CBC
        auth SHA1
        float 1
        reneg-sec 3660
        nobind
        mute-replay-warnings
        auth-user-pass auth.txt

    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the file name appended to the auth-user-pass directive on the last line, and we have to create a new authentication file 'auth.txt'. You can give the directive 'auth-user-pass' any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':

        $ sudo nano auth.txt

    and just put two lines of information in it - the username on the first line, and the password on the second, like so:

        myvpnusername
        verysecretpassword

    Store the file, change permissions, and call openvpn with your configuration file again:

        $ sudo chmod 0600 auth.txt
        $ sudo openvpn --config client.conf

    This should now work without you being prompted to enter a username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so:

        $ sudo service openvpn start client
        $ sudo service openvpn stop client

    Using Network Manager

    For newer Linux users, or the ones with 'console-phobia', I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

    [Screenshot: Network connections overview in Ubuntu]

    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'

    [Screenshot: Choose connection type to import VPN configuration]

    Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
    Next, on the 'VPN' tab, proceed with the following steps (directives from the configuration file are referenced in parentheses):

    General
    - Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)

    Authentication
    - Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
    - Enter the User name to access your client keys (Auth Name: myvpnusername)
    - Enter the Password (Auth Password: verysecretpassword) and choose your password handling
    - Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
    - Browse for your CA Certificate ('ca' - should be filled as ca.crt)
    - Specify your Private Key ('key' - here: client.pem)

    Then click on the 'Advanced...' button and check the following values:
    - Use custom gateway port: 443 (second value of the 'remote' directive)
    - Check the selected value of Cipher ('cipher')
    - Check HMAC Authentication ('auth')
    - Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

    Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry on your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.

    Advanced topic: routing

    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to access machines on the remote side, there are two ways to do that:

    - Proper routing on both sides of the connection, which enables access in both directions, or
    - Network masquerading on the 'client side' of the connection

    In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google.

    OK, back to the actual modifications. First, we need to have some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so:

        $ sudo tail -n20 /var/log/syslog

    Or, if your system is quite busy with logging, like so:

        $ sudo less /var/log/syslog | grep ovpn

    The output should contain a PUSH received message similar to the following one:

        Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

    The interesting part for us is the route option in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address ranges on both sides of the connection have to be different; otherwise you will have to shuffle IPs or increase the netmask. After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
    I created a shell script to take care of those steps:

        #!/bin/sh -e
        IPTABLES=/sbin/iptables
        DEV_LAN=eth0
        DEV_VPNS=tun+
        VPN=192.168.1.0/24

        $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
        $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
        $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.

    Simplifying your life - automatic connect on boot

    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly:

        $ sudo nano /etc/default/openvpn

    which should have content similar to this:

        # This is the configuration file for /etc/init.d/openvpn
        #
        # Start only these VPNs automatically via init script.
        # Allowed values are "all", "none" or space separated list of
        # names of the VPNs. If empty, "all" is assumed.
        # The VPN name refers to the VPN configuration file name.
        # i.e. "home" would be /etc/openvpn/home.conf
        #
        #AUTOSTART="all"
        #AUTOSTART="none"
        #AUTOSTART="home office"
        #
        # ... more information which remains unmodified ...

    With the OpenVPN client configuration as described above, you would set AUTOSTART either to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:

        $ sudo service openvpn restart

    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi


  • What's your experience with female programmers?

    - by Rachel
    Let me start by saying I'm female, but every single other female programmer I've known has been pretty terrible. The extent of their knowledge seems to be copy/paste and modify some values. Quite often they don't even try to learn new concepts, or to understand what they're doing. I'm not saying good female programmers aren't out there, just that the ratio of good to bad programmers seems much worse than for males. Perhaps it's because everyone feels they have to give female programmers a chance to prove they are not biased? Or is this just me? What have your experiences been with them? UPDATE: Just want to say thanks for all the responses. I've learned some interesting things and am happy to know that female programmers have such support :) My experience with them has been very limited but all bad, and I agree that it is probably due to my small sample size (around 5). I wasn't trying to be sexist with such a question, I just wanted to find out if it was really that abnormal to be a female programmer. I'm abnormal about a lot of things you'd expect from a female... I play video games in most of my spare time, I liked math so much I completed my entire math book during Christmas break one year (what can I say, I found the subject interesting), I'm not very social, I dislike shopping, I only have 2 pairs of shoes, my significant other doesn't work but does all the housework/laundry/etc... but anyways, thanks :)


  • It's All About Expectation Management

    - by D'Arcy Lussier
    I saw this tweet from Gerald Weinberg today:

    [Embedded tweet]

    I'd expand on this – it's not just managers, it's our clients as well. With so much focus on "agile" and reducing the amount of wasteful documentation created, those that typically consume traditional deliverables haven't caught up. For many, there is still a correlation between seeing a mountain of paper, or a 30-page Word document, or a 40-slide PowerPoint, and feeling like some "work" was done. The "Value Driven Development" movement is still in its infancy, even with the adoption and success stories. So, we have two options – we can complain about it, or we can learn how to live with it while continuing to evangelize the benefits of value over bloat. The reality is that perceived value is still value, so what's important – especially in a situation such as Gerald mentions, where management or clients don't understand the work – is to find out what the manager/client values and deliver to that. That doesn't mean you don't discuss it. That doesn't mean that if you see risks in what a manager/client is asking, you don't question it and provide alternatives. But it does mean that you don't slam the door on it – you don't just toss it aside and ignore what their perceived value is. The world isn't perfect, primarily because it's filled with imperfect people. The only way to get better is to engage and not dismiss each other, even if we disagree on value.


  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
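    The serialization described above is exposed through the Chart control's Serializer property. Here is a minimal C# sketch of saving and restoring a snapshot (the control name, file path, and button handlers are hypothetical, and SerializationContents.Data is just one of the supported content choices):

        using System;
        using System.Web.UI.DataVisualization.Charting;

        public partial class ChartSnapshot : System.Web.UI.Page
        {
            protected void SaveSnapshot_Click(object sender, EventArgs e)
            {
                // Persist only the chart's data points to disk (use
                // SerializationContents.All for appearance plus data).
                Chart1.Serializer.Content = SerializationContents.Data;
                Chart1.Serializer.Save(Server.MapPath("~/App_Data/snapshot.xml"));
            }

            protected void LoadSnapshot_Click(object sender, EventArgs e)
            {
                // Reconstitute the chart from the persisted snapshot.
                Chart1.Serializer.Load(Server.MapPath("~/App_Data/snapshot.xml"));
            }
        }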


  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of the Oracle Utilities Application Framework based products. These are typically run in interactive mode, where the utility prompts you for the values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run them in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options:

        oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

    where:

        -d <Owner,OwnerPswd,DbName>   Database connect information for the target database, e.g. spladm,spladm,DB200ODB.
        -u <Database Users>           A comma-separated list of database users where synonyms need to be created, e.g. spluser,splread.
        -r <ReadRole,UserRole>        Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ and SPL_USER, e.g. spl_read,spl_user.
        -l <logfile>                  Optional. Name of the log file.
        -h                            Help.

    The command line options allow the DBA to automate the execution, either via a script or via some scheduling utility that can execute the utilities. This option applies to the majority of the DBA utilities supplied with the product. Take a look at the others.
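    Putting the documented options together, a silent-mode invocation built from the example values above might look like this (an illustrative sketch, not taken from the product documentation):

        oragensec -d spladm,spladm,DB200ODB -u spluser,splread -r spl_read,spl_user -l oragensec.log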


  • How to easily change columns into rows in Excel?

    - by DavidD
    I have a huge Excel table that I need to transform into paragraphs for a Word report, and I can't find an efficient way to do it. The source looks like this:

    [Screenshot: source table]

    And I would ultimately need something like this, i.e. what I could get through a pivot table. Note that "Item C", which doesn't have any description values, is skipped:

    [Screenshot: target layout]

    Now, to get there I believe I need to transform my source into this intermediate format, which has one description per line:

    [Screenshot: intermediate format]

    How do I get from the source to the intermediate format in an efficient way? Or maybe there is an easier way to produce the target format that I don't know of? Any help is welcome!
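    The source-to-intermediate step is a classic unpivot, and the reshaping logic itself is mechanical. Here is a sketch of that logic in C# over in-memory rows (the row shape is assumed, since the original screenshots are unavailable; inside Excel itself a macro or pivot feature would play the same role):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SourceRow
        {
            public string Item = "";
            public List<string> Descriptions = new List<string>();
        }

        class Unpivot
        {
            static void Main()
            {
                var source = new[]
                {
                    new SourceRow { Item = "Item A", Descriptions = { "desc 1", "desc 2" } },
                    new SourceRow { Item = "Item B", Descriptions = { "desc 3" } },
                    new SourceRow { Item = "Item C" }, // no descriptions: skipped
                };

                // One (item, description) pair per output line; empty rows drop out,
                // which matches how "Item C" is skipped in the target layout.
                var intermediate = source.SelectMany(
                    row => row.Descriptions
                              .Where(d => !string.IsNullOrWhiteSpace(d))
                              .Select(d => new { row.Item, Description = d }));

                foreach (var line in intermediate)
                    Console.WriteLine($"{line.Item}\t{line.Description}");
            }
        }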


  • Downloading a file over HTTP the SSIS way

    This post shows you how to download files from a web site whilst really making the most of the SSIS objects that are available. There is no task to do this, so we have to use the Script Task and some simple VB.NET or C# (if you have SQL Server 2008) code. Very often I see suggestions about how to use the .NET class System.Net.WebClient, and of course this works; you can code pretty much anything you like in .NET. Here I'd just like to raise the profile of an alternative. This approach uses the HTTP Connection Manager, one of the stock connection managers, so you can use configurations and property expressions in the same way you would for all other connections. Settings like the security details that you would want to make configurable already are, but if you take the .NET route you have to write quite a lot of code to manage those values via package variables. Using the connection manager we get all of that flexibility for free. The screenshot below illustrates some of the options we have. Using the HttpClientConnection class makes for much simpler code as well. I have demonstrated two methods: DownloadFile, which just downloads a file to disk, and DownloadData, which downloads the file and retains it in memory. In each case we show a message box to note the completion of the download. You can download a sample package below, but first the code:

        Imports System
        Imports System.IO
        Imports System.Text
        Imports System.Windows.Forms
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()
                ' Get the unmanaged connection object, from the connection manager called "HTTP Connection Manager"
                Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing)

                ' Create a new HTTP client connection
                Dim connection As New HttpClientConnection(nativeObject)

                ' Download the file #1
                ' Save the file from the connection manager to the local path specified
                Dim filename As String = "C:\Temp\Sample.txt"
                connection.DownloadFile(filename, True)

                ' Confirm file is there
                If File.Exists(filename) Then
                    MessageBox.Show(String.Format("File {0} has been downloaded.", filename))
                End If

                ' Download the file #2
                ' Read the text file straight into memory
                Dim buffer As Byte() = connection.DownloadData()
                Dim data As String = Encoding.ASCII.GetString(buffer)

                ' Display the file contents
                MessageBox.Show(data)

                Dts.TaskResult = Dts.Results.Success
            End Sub

        End Class

    Sample Package: HTTPDownload.dtsx (74KB)


  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content makes it harder to both style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Form developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control to not emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more!
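    As a concrete illustration of the kinds of properties the article covers (sample markup sketched here, not taken from the article itself): in ASP.NET 4.0 the FormView control gained a RenderOuterTable property to suppress its wrapping <table>, and list controls such as RadioButtonList can render as list elements instead of tables via RepeatLayout:

        <!-- No wrapping <table> is emitted around the FormView's content -->
        <asp:FormView ID="fvProduct" runat="server" RenderOuterTable="false">
            <ItemTemplate>
                <%# Eval("Name") %>
            </ItemTemplate>
        </asp:FormView>

        <!-- Renders as a <ul> of <li> items rather than a <table> -->
        <asp:RadioButtonList ID="rblChoices" runat="server" RepeatLayout="UnorderedList">
            <asp:ListItem Text="Yes" />
            <asp:ListItem Text="No" />
        </asp:RadioButtonList>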



  • Modify SonicWALL Global VPN Client Encrypted RCF

    - by PaulWaldman
    Presently rolling out VPN access to a small office. I am using a SonicWALL TZ-170 running SonicOS Enhanced 3.4.1.0-2e. I've created an encrypted RCF file for the clients to import into the SonicWALL Global VPN Client. Is there a way to provide friendly names for the "Connection Name" and "HostName" in the RCF file? If I create an unencrypted RCF file I can easily modify these values. Is there any way to modify them in an encrypted RCF file? Thanks.


  • Python — Time complexity of built-in functions versus manually-built functions in finite fields

    - by stackuser
    Generally, I'm wondering about the advantages versus disadvantages of using the built-in arithmetic functions versus rolling your own in Python. Specifically, I'm taking in GF(2) finite field polynomials in string format, converting them to base-2 values, performing arithmetic, then outputting them back as polynomials in string format. So a small example of this is multiplication.

    Rolling my own:

        def multiply(a,b):
            bitsa = reversed("{0:b}".format(a))
            g = [(b<<i)*int(bit) for i,bit in enumerate(bitsa)]
            return reduce(lambda x,y: x+y, g)

    Versus the built-in:

        def multiply(a,b): # a,b are GF(2) polynomials in binary form
            ....
            return a*b # returns product of 2 polynomials in gf2

    Currently, operations like multiplicative inverse (with, for example, 20-bit exponents) take a long time to run in my program, as it's using all of Python's built-in mathematical operations, like // floor division and % modulus, etc., as opposed to my own division, remainder, and so on. I'm wondering how much of a gain in efficiency and performance I can get by building these manually (as shown above). I realize the gains depend on how well the manual versions are built; that's not the question. I'd like to find out 'basically' how much advantage there is over the built-ins. So for instance, if multiplication (as in the example above) is well-suited for base-10 (decimal) arithmetic but has to jump through more hoops to change bases to binary and then even more hoops in operating (so it's lower efficiency), that's what I'm wondering. Like, I'm wondering if it's possible to bring the time down significantly by building them myself, in ways that maybe some professionals here have already come across.


  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven't seen before. A few weeks ago the following error message started showing up in our logs: "The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid." This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a "start date" and "end date" parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a "placeholder" time. How is midnight an "invalid time"?

    Globalization Is Hard

    Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight, many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any "local" dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC, since that local time never actually existed. We hadn't experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn't conflict with our "placeholder" time of midnight.

    Detecting Invalid Times

    In .NET you might use code similar to the following for converting a local time to UTC:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system time zone id for the "Brasilia" time zone
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    The code above throws the "invalid time" exception referenced above. We could try to detect whether or not the local time is invalid with something like this:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system time zone id for the "Brasilia" time zone
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        if (localTimeZone.IsInvalidTime(localDate))
            localDate = localDate.AddHours(1);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, they record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again.
    In this scenario, how should we interpret February 18, 2012 11:30 PM?

    Enter Noda Time

    I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the "sounds-like-it-might-be-useful-someday" category. Let's see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code; the NuGet package version 0.1.0, published 2011-08-19, will incorrectly report unambiguous times as being ambiguous):

        var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0);
        const string timeZoneId = "Brazil/East";
        var timezone = DateTimeZone.ForId(timeZoneId);
        var localDateTimeMapping = timezone.MapLocalDateTime(localDateTime);
        ZonedDateTime unambiguousLocalDateTime;
        switch (localDateTimeMapping.Type)
        {
            case ZoneLocalMapping.ResultType.Unambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.UnambiguousMapping;
                break;
            case ZoneLocalMapping.ResultType.Ambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.EarlierMapping;
                break;
            case ZoneLocalMapping.ResultType.Skipped:
                unambiguousLocalDateTime = new ZonedDateTime(
                    localDateTimeMapping.ZoneIntervalAfterTransition.Start,
                    timezone);
                break;
            default:
                throw new InvalidOperationException(
                    string.Format("Unexpected mapping result type: {0}", localDateTimeMapping.Type));
        }
        var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc();

    Let's break this sample down:

    - I'm using the Noda Time 'LocalDateTime' object to represent the local date and time. I've provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a 'LocalDateTime' as an "invalidated" date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can't make any guarantees about its ambiguity.

    - The 'timeZoneId' in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future.

    - I'm making use of the Noda Time 'DateTimeZone.MapLocalDateTime' method to disambiguate the original local date time value. This method returns an instance of the Noda Time object 'ZoneLocalMapping' containing information about how the provided local date time maps to the provided time zone.

    - The disambiguated local date and time value will be stored in the 'unambiguousLocalDateTime' variable as an instance of the Noda Time 'ZonedDateTime' object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is 'internal') to assure callers that each instance represents an unambiguous point in time.

    - The value of the 'unambiguousLocalDateTime' might vary depending upon the 'ResultType' returned by the 'MapLocalDateTime' method.
    There are three possible outcomes:

    - If the provided local date time is unambiguous in the provided time zone, I can immediately set the 'unambiguousLocalDateTime' variable from the 'UnambiguousMapping' property of the mapping returned by the 'MapLocalDateTime' method.

    - If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the 'EarlierMapping' property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the 'LaterMapping' property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that, as the programmer, I've been forced to deal with what appears to be an ambiguous date and time.

    - If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error, depending on the needs of the application.

    The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final 'convertedDateTime' variable (typed as a plain old .NET DateTime) has its value set from a Noda Time 'Instant'. An 'Instant' represents a number of ticks since 1970-01-01 at midnight (the Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the 'ToDateTimeUtc()' method.

    This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this "simple" code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times, which could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I've only scratched the surface of its full capabilities. I also think it's safe to say that it's still beta software for the time being, so I'm not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.


  • Ajax Control Toolkit November 2011 Release

    - by Stephen Walther
    I’m happy to announce the November 2011 release of the Ajax Control Toolkit. This release introduces a new Balloon Popup control and several enhancements to the existing Tabs control, including support for on-demand loading of tab content, support for vertical tabs, and support for keyboard tab navigation. We also fixed the top-voted bugs associated with the Tabs control reported at CodePlex.com. You can download the new release by visiting the CodePlex website: http://AjaxControlToolkit.CodePlex.com

    Alternatively, the fast and easy way to get the latest release of the Ajax Control Toolkit is to use NuGet. Open your Library Package Manager console in Visual Studio 2010 and type:

        Install-Package AjaxControlToolkit

    After you install the Ajax Control Toolkit through NuGet, please do a Rebuild of your project (the menu option Build, Rebuild). After you do a Rebuild, the ajaxToolkit prefix will appear in Intellisense.

    Using the Balloon Popup Control

    Why a new Balloon Popup control? The Balloon Popup control is the second most requested new feature for the Ajax Control Toolkit according to CodePlex votes. The Balloon Popup displays a message in a balloon when you shift focus to a control, click a control, or hover over a control. You can use the Balloon Popup, for example, to display instructions for TextBoxes which appear in a form. Here's the code used to create the Balloon Popup:

        <ajaxToolkit:ToolkitScriptManager ID="tsm1" runat="server" />
        <asp:TextBox ID="txtFirstName" Runat="server" />
        <asp:Panel ID="pnlFirstNameHelp" runat="server">
            Please enter your first name
        </asp:Panel>
        <ajaxToolkit:BalloonPopupExtender
            TargetControlID="txtFirstName"
            BalloonPopupControlID="pnlFirstNameHelp"
            BalloonSize="Small"
            UseShadow="true"
            runat="server" />

    You can also use the Balloon Popup to explain hard-to-understand words in a text document. Here's how you display the Balloon Popup when you hover over the link:

        The point of the conversation was
        <asp:HyperLink ID="lnkObfuscate" Text="obfuscated" CssClass="hardWord" runat="server" />
        by his incessant coughing.
        <ajaxToolkit:ToolkitScriptManager ID="tsm1" runat="server" />
        <asp:Panel id="pnlObfuscate" Runat="server">
            To bewilder or render something obscure
        </asp:Panel>
        <ajaxToolkit:BalloonPopupExtender
            TargetControlID="lnkObfuscate"
            BalloonPopupControlID="pnlObfuscate"
            BalloonStyle="Cloud"
            UseShadow="true"
            DisplayOnMouseOver="true"
            Runat="server" />

    There are four important properties which you need to know about when using the Balloon Popup control:

    - BalloonSize – The three balloon sizes are Small, Medium, and Large.
    - BalloonStyle – The two built-in styles are Rectangle and Cloud.
    - UseShadow – When true, a drop shadow appears behind the popup.
    - Position – Can have the values Auto, BottomLeft, BottomRight, TopLeft, TopRight. When set to Auto, which is the default, the Balloon Popup will appear where it has the most screen real estate.

    The following screenshots illustrate how these settings affect the appearance of the Balloon Popup.

    Customizing the Balloon Popup

    You can customize the appearance of the Balloon Popup by creating your own Cascading Style Sheet and sprite. The Ajax Control Toolkit sample site includes a sample of a custom Oval Balloon Popup style. This custom style was created by using a custom Cascading Style Sheet and image.
You point the Balloon Popup at a custom Cascading Style Sheet and CSS class by using the CustomCssUrl and CustomClassName properties, like this:

<asp:TextBox ID="txtCustom" autocomplete="off" runat="server" /> <br /> <asp:Panel ID="Panel3" runat="server"> This is a custom BalloonPopupExtender style created with a custom Cascading Style Sheet. </asp:Panel> <ajaxToolkit:BalloonPopupExtender ID="bpe1" TargetControlID="txtCustom" BalloonPopupControlID="Panel3" BalloonStyle="Custom" CustomCssUrl="CustomStyle/BalloonPopupOvalStyle.css" CustomClassName="oval" UseShadow="true" runat="server" />

Learn More about the Balloon Popup

To learn more about the Balloon Popup control, visit the sample page for the Balloon Popup at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/BalloonPopup/BalloonPopupExtender.aspx

Improvements to the Tabs Control

In this release, we introduced several important new features for the existing Tabs control. We also fixed all of the top-voted bugs for the Tabs control.

On-Demand Loading of Tab Content

Here is the scenario. Imagine that you are using the Tabs control in a Web Forms page. The Tabs control displays two tabs: Customers and Products. When you click the Customers tab then you want to see a list of customers, and when you click the Products tab then you want to see a list of products. In this scenario, you don’t want the list of customers and products to be retrieved from the database when the page is initially opened. The user might never click on the Products tab, and all of the work to load the list of products from the database would be wasted. In this scenario, you want the content of a tab panel to be loaded on demand. The products should only be loaded from the database and rendered to the browser when you click the Products tab, and not before. The Tabs control in the November 2011 Release of the Ajax Control Toolkit includes a new property named OnDemand. When OnDemand is set to the value True, a tab panel won’t be loaded until you click its associated tab. Here is the code for the aspx page:

<ajaxToolkit:ToolkitScriptManager ID="tsm1" runat="server" /> <ajaxToolkit:TabContainer ID="tabs" OnDemand="true" runat="server"> <ajaxToolkit:TabPanel HeaderText="Customers" runat="server"> <ContentTemplate> <h2>Customers</h2> <asp:GridView ID="grdCustomers" DataSourceID="srcCustomers" runat="server" /> <asp:SqlDataSource ID="srcCustomers" SelectCommand="SELECT * FROM Customers" ConnectionString="<%$ ConnectionStrings:StoreDB %>" runat="server" /> </ContentTemplate> </ajaxToolkit:TabPanel> <ajaxToolkit:TabPanel HeaderText="Products" runat="server"> <ContentTemplate> <h2>Products</h2> <asp:GridView ID="grdProducts" DataSourceID="srcProducts" runat="server" /> <asp:SqlDataSource ID="srcProducts" SelectCommand="SELECT * FROM Products" ConnectionString="<%$ ConnectionStrings:StoreDB %>" runat="server" /> </ContentTemplate> </ajaxToolkit:TabPanel> </ajaxToolkit:TabContainer>

Notice that the TabContainer includes an OnDemand="true" property. The Tabs control contains two Tab Panels. The first tab panel uses a GridView and SqlDataSource to display a list of customers and the second tab panel uses a GridView and SqlDataSource to display a list of products. 
And here is the code-behind for the page:

using System; using System.Diagnostics; using System.Web.UI.WebControls; namespace ACTSamples { public partial class TabsOnDemand : System.Web.UI.Page { protected override void OnInit(EventArgs e) { base.OnInit(e); srcProducts.Selecting += new SqlDataSourceSelectingEventHandler(srcProducts_Selecting); } void srcProducts_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { Debugger.Break(); } } }

The code-behind file includes an event handler for the Products SqlDataSource Selecting event. The handler breaks into the debugger by calling the Debugger.Break() method. That way, we can know when the Products SqlDataSource actually retrieves the list of products. When the OnDemand property has the value False then the Selecting event handler is called immediately when the page is first loaded. The contents of all of the tabs are loaded (and the contents of the unselected tabs are hidden) when the page is first loaded. When the OnDemand property has the value True then the Selecting event handler is not called when the page is first loaded. The event handler is not called until you click on the Products tab. If you never click on the Products tab then the list of products is never retrieved from the database.

If you want even more control over when the contents of a tab panel get loaded then you can use the TabPanel OnDemandMode property (a short markup sketch appears at the end of this post). This property accepts the following three values:

None – Never load the contents of the tab panel again after the page is first loaded.
Once – Wait until the tab is selected to load the contents of the tab panel.
Always – Load the contents of the tab panel each and every time you select the tab.

There is a live demonstration of the OnDemandMode property in the sample page for the Tabs control: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Tabs/Tabs.aspx

Displaying Vertical Tabs

With the November 2011 Release, the Tabs control now supports vertical tabs. To create vertical tabs, just set the TabContainer UseVerticalStripPlacement property to the value True, like this:

<ajaxToolkit:TabContainer ID="tabs" OnDemand="false" UseVerticalStripPlacement="true" runat="server"> <ajaxToolkit:TabPanel ID="TabPanel1" HeaderText="First Tab" runat="server"> <ContentTemplate> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </ContentTemplate> </ajaxToolkit:TabPanel> <ajaxToolkit:TabPanel ID="TabPanel2" HeaderText="Second Tab" runat="server"> <ContentTemplate> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </ContentTemplate> </ajaxToolkit:TabPanel> </ajaxToolkit:TabContainer>

In addition, you can use the TabStripPlacement property to control whether the tab strip appears at the left or right or top or bottom of the tab panels.

Tab Keyboard Navigation

Another highly requested feature for the Tabs control is support for keyboard navigation. The Tabs control now supports the arrow keys and the Home and End keys. In order for the arrow keys to work, you must first move focus to the tab control on the page by either clicking on a tab with your mouse or repeatedly hitting the Tab key. 
You can try out the new keyboard navigation support by trying any of the demos included in the Tabs sample page: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Tabs/Tabs.aspx

Summary

I hope that you take advantage of the new Balloon Popup control and the new features which we introduced for the Tabs control. We added a lot of new features to the Tabs control in this release including support for on-demand tabs, support for vertical tabs, and support for tab keyboard navigation. I want to thank the developers on the Superexpert team for all of the hard work which they put into this release.
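As a footnote, here is a minimal markup sketch of the per-panel OnDemandMode property discussed above. The panel names and contents are placeholders (they are not taken from the samples in this post), and the post does not spell out how the container-level OnDemand flag interacts with per-panel settings, so check the sample site for the authoritative behavior:

    <ajaxToolkit:TabContainer ID="tabs" runat="server">
        <ajaxToolkit:TabPanel HeaderText="Customers" OnDemandMode="Once" runat="server">
            <ContentTemplate><%-- loaded the first time the tab is selected --%></ContentTemplate>
        </ajaxToolkit:TabPanel>
        <ajaxToolkit:TabPanel HeaderText="Products" OnDemandMode="Always" runat="server">
            <ContentTemplate><%-- reloaded every time the tab is selected --%></ContentTemplate>
        </ajaxToolkit:TabPanel>
    </ajaxToolkit:TabContainer>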

    Read the article

  • Bypass RDP Client Warning

    - by Butcher
How do I bypass the security warning in the RDP client every time I launch it from an RDP shortcut? The message title reads: "The publisher of this remote connection cannot be identified. Do you want to connect anyway?" There's a checkbox that reads: "Don't ask me again for connections to this computer". If we check that, it writes the following registry value: [HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\LocalDevices] "MachineIP or Name"=dword:00000004 I'm trying to bypass this warning by writing this registry value before I launch the RDP file. The problem is that the dword value varies. I found that on one machine (Win7) it was 4, but on another machine (XP) the value was 72 decimal. Does it vary depending on your OS, or by the RDP client version? Other info: Signing all my RDP files is NOT an option. Checking the checkbox is NOT an option as we are trying to automate some stuff with a C# tool. Thank you
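For reference, here is a minimal C# sketch of the approach being described, pre-writing the LocalDevices value before launching the RDP file. The server name and the DWORD value are exactly the open questions raised above (4 vs. 72), so treat both as assumptions to verify on each target OS:

    using Microsoft.Win32;

    class RdpWarningBypass
    {
        static void Main()
        {
            // Assumption: "myserver" must match the host name or IP exactly as it
            // appears in the .rdp file, and 4 is the correct flag value for this
            // OS / RDP client combination (it was 72 on the XP machine mentioned above).
            Registry.SetValue(
                @"HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\LocalDevices",
                "myserver",
                4,
                RegistryValueKind.DWord);
        }
    }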

    Read the article

  • GLSL compile error when accessing an array with compile-time constant index

    - by Benlitz
I have this shader that works well on my computer (an ATI HD 5700). I have a loop iterating between two constant values, which is, afaik, acceptable in a GLSL shader. I write to two arrays in this loop. #define NB_POINT_LIGHT 2 ... varying vec3 vVertToLight[NB_POINT_LIGHT]; varying vec3 vVertToLightWS[NB_POINT_LIGHT]; ... void main() { ... for (int i = 0; i < NB_POINT_LIGHT; ++i) { if (bPointLightUse[i]) { vVertToLight[i] = ConvertToTangentSpace(ShPointLightData[i].Position - WorldPos.xyz); vVertToLightWS[i] = ShPointLightData[i].Position - WorldPos.xyz; } } ... } I tried my program on another computer equipped with an NVIDIA GTX 560 Ti, and it fails to compile my shader. I get the following errors (94 and 95 are the lines of the two assignments) when calling glLinkProgram: Vertex info ----------- 0(94) : error C5025: lvalue in assignment too complex 0(95) : error C5025: lvalue in assignment too complex I think my code is valid, and I don't know if this comes from a compiler bug, from a conversion of my shader to another format by the compiler (NVIDIA appears to convert it to Cg), or if I just missed something. I already tried removing the if (bPointLightUse[i]) statement and I still get the same error. However, if I just write this: vVertToLight[0] = ConvertToTangentSpace(ShPointLightData[0].Position - WorldPos.xyz); vVertToLightWS[0] = ShPointLightData[0].Position - WorldPos.xyz; vVertToLight[1] = ConvertToTangentSpace(ShPointLightData[1].Position - WorldPos.xyz); vVertToLightWS[1] = ShPointLightData[1].Position - WorldPos.xyz; Then I don't have the error anymore, but it's really inconvenient, so I would prefer to keep something loop-based. Here is the more detailed config that works: Vendor: ATI Technologies Inc. Renderer: ATI Radeon HD 5700 Series Version: 4.1.10750 Compatibility Profile Context Shading Language version: 4.10 And here is the more detailed config that doesn't work (should also be a compatibility profile, although not indicated): Vendor: NVIDIA Corporation Renderer: GeForce GTX 560 Ti/PCI/SSE2 Version: 4.1.0 Shading Language version: 4.10 NVIDIA via Cg compiler

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for. In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

Example Table

Here’s a T-SQL script to create that table and populate it with 1,000 rows:

CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540); WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000;

Test 1: A Simple Update

Let’s run a query to subtract one from every value in the some_value column:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple: looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

Test 2: Simple Update with an Index

Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column:

CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON );

This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let’s run our update query again:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): the Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool. 
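Incidentally, if you want to reproduce the read and timing figures quoted in these tests, they come from the standard session statistics options (ordinary T-SQL, nothing specific to this example), enabled before each update:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- then run the test query, for example:
    UPDATE dbo.LOBtest WITH (TABLOCKX)
    SET some_value = some_value - 1;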
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table. Let’s see what happens if we make the nonclustered index unique.

Test 3: Simple Update with a Unique Index

Here’s the script to create a new unique index, and drop the old one:

CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest;

Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let’s see:

UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1;

Whoa! Now look at the elapsed time and logical reads:

Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
CPU time = 172 ms, elapsed time = 16172 ms.

Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

The Query Plan

The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads?

Updates that Sneakily Read Data

We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

Why the LOB Data Is Needed

This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3. 
I’ll reproduce it here for convenience. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I’ll talk more about this difference shortly.

The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That’s fine, but it’s not possible to generalize this logic to work with every possible update query.

SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

Beating the Set-Based Update with a Cursor

One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated. 
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF; DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE(); DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC; OPEN curUpdate; WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK; IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE; UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END; CLOSE curUpdate; DEALLOCATE curUpdate; SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

Clustered Indexes

A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540); WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000;

Now here’s a query to modify the cluster keys:

UPDATE dbo.LOBtest SET pk = pk + 1;

As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.
Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.
SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms.

Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns). A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don’t have space for here.

Final Thoughts

The behaviour shown in this post is not limited to LOB data by any means. 
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

    Read the article

  • CodePlex Daily Summary for Thursday, March 04, 2010

CodePlex Daily Summary for Thursday, March 04, 2010

New Projects
- AcPrac: An educational program designed to run on Windows. It is fully customizable. It is developed in C# 2008.
- argo: Linguistic Tool
- Camelot: A simple utility for testing cross site data queries in SharePoint
- delta: Delta is a difference tool for large files. Delta will have several clients, including AJAX and Silverlight. You can view differences in a scroll...
- EF Dorsal: A dorsal spine for Entity Framework based projects. This code generator provides a powerful Business Layer with almost all high important best p...
- Excel formatting: Excel formatting project
- Eyes Recognition: eye recognition
- GameOfLife: The Game of Life is the best example of a cellular automaton. It's developed in C#, Silverlight.
- GKO Libraries: .NET tool libraries written in C# that we have used in our projects
- Hello Demo: A simple Hello World application, used to demonstrate access to CodePlex using Team Explorer
- jog.Portal: jogportal lays the foundation for an extensible portal solution leveraging the latest technology including LINQ and Silverlight
- KM Brasil: k-meleon, km, gecko, brasil, browser, web, web browser, navegador, navegação, kmeleon, k meleon
- Lost in Translation: Ever find yourself in a foreign country eager (or clueless) about what is written on a shop sign or restaurant menu? With your trusty phone, its came...
- Machine Learning for .NET: Machine Learning Library for .NET. Initial inclusions will be binary and multi-class classification as well as standard clustering algorithms.
- Maito: Iron Kingdoms Name Generator
- ManPowerEngine: ManPowerEngine is a Silverlight 2D Game Engine. It has a game application framework and supports game object hierarchy management, 2D physics simu...
- Marvin's Arena: Marvin's Arena is a free and entertaining programming game. The game is designed to easily learn programming in any .NET compatible language. It is...
- MultiMediaPlayer: Integrate our content
- NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD is a simple distributed cache system written in .NET. NCacheD offers functionality similar to that of MemcacheD but scaled back. NCacheD ...
- NDAP: OPeNDAP is a client/server system for making local scientific data accessible to remote locations without regard to the local or remote storage for...
- Open Guide CMS: Open Guide makes your traveling through the city more adventurous, more educational and more fun. You can support us by lines of code, by interesti...
- Rollback - A social backup tool.: Rollback is a simple and intuitive social backup application. You can create multiple backup jobs with a few mouse clicks and even schedule it to ...
- sELedit: An editor for the elements.data file of the popular MMORPG Perfect World.
- SharePoint Theme Applicator: SharePoint Theme Applicator will give SharePoint administrators the ability to apply a theme across a whole web application (i.e. apply a theme to...
- sPATCH: ! sPATCH - Perfect World Client Autopatcher. This beta patcher is an alternative to the default Perfect World patcher. It offers easy client patc...
- Wiggle: A-life investigation. Let's make little squirmies!
- WPFSLBlendModeFx: A blend mode library for WPF & Silverlight.
- 休息提醒: Programmers, especially those of us as obsessed with code as I am, feel that even a trip to the bathroom is a waste of time once we are absorbed in an interesting program. This program locks the computer or turns off the display after a set time, to some degree forcing programmers to take a rest.

New Releases
- AMFParser: 1.1.0: Add handling of DSK object when using BlazeDS. Add handling of string references. Add handling of DateTime values. Correct handling of Double values. B...
- Camelot: Camelot 0.1 Alpha: Early release: 1) Query by content type, optionally list or base template 2) Query by list or base template
- Dictionary Translator for Umbraco: Dictionary Translator 1.1: This is a minor release that fixes a bug that Thomas Hohler found on the first day of release. This package is to be used with the ASP.NET open sou...
- Dynamic Configuration: Sample Application Release 1: These binaries demonstrate the effect of using DynamicConfigurationManager. The source-code for these binaries is available in the 'Source Code' t...
- EF Dorsal: EFDorsal v0.3b: Second real version. This version adds support for types and entity sets in tree-view.
- Enterprise Library Extensions: Release 1.0: This is the initial release of the package. The release will contain one feature only, as being able to deploy the project itself is the milestone ...
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.4 beta 2 Released: Hi, today’s release contains a fix for the following issues: * In a WPF application the chart was throwing an exception as VisualStateGroup was not foun...
- GameOfLife: Game of life: First release of the game. Published
- GameStore League Manager: League Manager 1.0 release 2: Fixes crashing bugs from the first release. To use: 1. Install SQL Server Express 2005 http://www.microsoft.com/Sqlserver/2005/en/us/express-down....
- Marvin's Arena: Version 0.0.5.0: Code Editor (development of robots without Visual Studio - no debugging) * Rounds * 3D Battle Engine: Skybox * 3D Battle Engine: Robot...
- Morphfolia - ASP.NET CMS and Framework: Morphfolia v2.4.1.1: Morphfolia v2.4.1.1 - New Release. Includes: Better support for browsers other than IE (Chrome, Firefox, Safari - all tested on Windows). Supports ...
- NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD Version 1: Getting Started: To get up and running, open two instances of Visual Studio 2010. In one window open the NCacheD client solution and then open the ...
- Papercut: Papercut 2010-3-3: This release includes a few bug fixes and updates, several of which were contributed patches (thanks!). Feature: Added support for embedded images...
- PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.4: Perwapi version 1.1.4 is the complete distribution package. It contains binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
- Prolog.NET: Prolog.NET 1.0 Beta 1.2: Installer includes: primary Prolog.NET assembly, Prolog.NET Workbench, PrologTest console application, all required dependencies. Beta 1.2 in...
- Protoforma | Tactica Adversa: Skilful 0.2.4.320: Beta
- RoTwee: RoTwee 6.1.0.0: 16604 "Post playing tune feature" is added. Using this new feature, you can tweet the tune playing in iTunes. 16625 Error processing for network error...
- sELedit: sELedit v1.0: -
- SharePoint 2007 Deployment Wizard: Support for SharePoint Server and Foundation 2010: This release encompasses the supported install paths for the default install of SPS 2010 (the 14 hive). All three versions are now supported (60 h...
- SharePoint Theme Applicator: SharePoint Theme Applicator: SharePoint Theme Applicator was built using C# and WPF; it includes the following features: Provides the total number of site collections in the g...
- Shinkansen: compress, crunch, combine, and cache JavaScript and CSS: Shinkansen 1.0.0.030310: Build 1.0.0.030310, binaries only
- sPATCH: sPatcher v0.8: sPatch - Server Example. *Contains a sample Patch that "downgrades" PWI 1.4.2 Client to a 1.3.6 Client
- TFS Code Comment Checking Policy (CCCP): CCCP 3.0 for VSTS 2008 SP1: This release includes NRefactory 3.2.0.5571 and is built against VSTS 2008 SP1 (.NET 3.5 is required). New: horizontal scrollbar in listboxes for ...
- TortoiseHg: Beta for TortoiseHg 1.0 (0.9.31254): Please back up your user Mercurial.ini file and then uninstall any 0.9.X release before installing. Use the x86 msi file for 32 bit platforms and th...
- TwitEclipseAPI: TwitEclipseAPI 0.9: 0.9 Release of TwitEclipseAPI. Moved API calls to the api.Twitter.com URL. Moved API calls to the versioning API. Now uses the increased Rate Limit...
- TwitterVB - A .NET Twitter Library: TwitterVB-2.3: Patch 5151: Added BlockedUsers function to get a page other than the first. Patch 5420: The ListMembers function will now return more than just th...
- 休息提醒: Initial version: Initial version

Most Popular Projects
MetaSharp, Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), Microsoft SQL Server Community & Samples, ASP.NET, LiveUpload to Facebook

Most Active Projects
Rawr, BlogEngine.NET, MapWindow GIS, patterns & practices – Enterprise Library, jQuery Library for SharePoint Web Services, MDT Web FrontEnds, svn2tfs, DiffPlex - a .NET Diff Generator, Ionics Isapi Rewrite Filter, Farseer Physics Engine

    Read the article

  • Are you aware of .NET Reflector Pro?

I'm sure many of my readers know Reflector, the tool for decompiling assemblies to see what they contain, maybe investigating what Microsoft has done in the base .NET assemblies, maybe trying to understand 3rd-party assemblies (or maybe just trying to recover lost source code ;-) ). It's an invaluable tool to have in your toolbox. One scenario where it helps a lot is SharePoint development, in case you run into problems with the API. But are you aware that MS gave the product to Red Gate Software (http://www.red-gate.com), which released a Pro version of Reflector (http://www.red-gate.com/products/reflector/index.htm) a couple of months ago? Have a look at the feature set on top of the free version:

- Full support for .NET 1.0, 1.1, 2.0, 3.0, 3.5, and 4.0
- Decompile an entire assembly to either C# or VB to view and debug in Visual Studio
- Step-through debugging of any assembly in Visual Studio (as long as it's not obfuscated): step into and set breakpoints anywhere in any assembly
- Watch variables in the decompiled code
- Use Visual Studio's advanced debugging features in decompiled code: Set Next Statement, modify variable values, and dynamic expression evaluation in the immediate window

I strongly encourage you to have a look at .NET Reflector Pro in case you haven't done so already.

    Read the article

  • Problem starting Glassfish on a VPS

    - by Raydon
I am attempting to install GlassFish v3 on my Ubuntu (8.04) VPS using Java 1.6. I initially tried starting the server using: asadmin start-domain and received the following error message:

JVM failed to start: com.sun.enterprise.admin.launcher.GFLauncherException: The server exited prematurely with exit code 1. Before it died, it produced the following output: Error occurred during initialization of VM Could not reserve enough space for object heap Command start-domain failed.

I attempted to run it again and received a different message:

Waiting for DAS to start Error starting domain: domain1. The server exited prematurely with exit code 1. Before it died, it produced the following output: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. Command start-domain failed.

If I run cat /proc/meminfo I get the following (all other values are 0 kB): MemTotal: 1310720 kB MemFree: 1150668 kB LowTotal: 1310720 kB LowFree: 1150668 kB

I have checked the contents of glassfish/glassfish/domains/domain1/config/domain.xml and the JVM setting is: -Xmx512m

Any help on resolving this problem would be appreciated.

    Read the article

  • enable email on Godaddy when using Zerigo on Heroku hosted app

    - by joelmaranhao
A little recap of what I have done ... and then my questions Q1, Q2 and Q3.

1 - I developed a RoR app that I deployed on Heroku: biowatts.heroku.com
2 - I bought a domain name at GoDaddy: biowattsonline.com
3 - I am using the Zerigo addon for DNS:
heroku addons:add custom_domains
heroku addons:add zerigo_dns:basic
4 - Added my domains in Heroku:
heroku domains:add biowattsonline.com
heroku domains:add www.biowattsonline.com
and subdomains:
heroku domains:add calculator.biowattsonline.com
Q1: Where do we configure the forward to http://biowattsonline.com/biogas_calculator ?
5 - Configured GoDaddy, adding the Zerigo domains in the Nameservers section: a.ns.zerigo.net b.ns.zerigo.net c.ns.zerigo.net d.ns.zerigo.net e.ns.zerigo.net The GoDaddy DNS section is empty: "Not hosted here". OK, this all works fine ... http://biowattsonline.com is correctly found.
6 - Subdomain forward: I want calculator.biowattsonline.com to be forwarded to biowattsonline.com/biogas_calculator
Q2: So I created the forward in GoDaddy ... but is that correct?
7 - GoDaddy email
Q3: I have one free email account with GoDaddy; now that I am using Zerigo I don't know how to configure GoDaddy to make it work again, because it worked with the default values.
Any ideas on Q1, Q2 and Q3?
Thanks, Joel

    Read the article

  • A dacpac limitation – Deploy dacpac wizard does not understand SqlCmd variables

    - by jamiet
Since the release of SQL Server 2012 I have become a big fan of using dacpacs for deploying SQL Server databases (for reasons that I will explain some other day) and I chose to use a dacpac to distribute my recently announced utility sp_ssiscatalog (read: Introducing sp_ssiscatalog (v1.0.0.0)). Unfortunately if you read that blog post you may have taken note of the following:

Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables.

I think it is worth calling out this limitation separately in this blog post because it's a limitation that all dacpac users need to be aware of. If you try to deploy the dacpac containing sp_ssiscatalog using the wizard in SSMS then this is what you will see:

TITLE: Microsoft SQL Server Management Studio
------------------------------
Could not deploy package. (Microsoft.SqlServer.Dac)
------------------------------
ADDITIONAL INFORMATION:
Missing values for the following SqlCmd variables: SSISDB. (Microsoft.Data.Tools.Schema.Sql)
------------------------------
BUTTONS: OK
------------------------------

The message is quite correct. The SSDT DB project that I used to build this dacpac *does* have a SqlCmd variable in it called SSISDB. Quite simply, the Dac Deployment wizard in SSMS is not capable of deploying such dacpacs. Your only option for deploying such dacpacs is to use the command-line tool sqlpackage.exe (a sample invocation appears at the end of this post). Generally I use sqlpackage.exe anyway (which is why it has taken me months to encounter the aforementioned problem) and have found it preferable to using a GUI-based wizard. Your mileage may vary.

@Jamiet
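For reference, here is the shape of a sqlpackage.exe invocation that supplies the SqlCmd variable. The file, server, and target database names below are placeholders; the value assigned to SSISDB is simply the default name of the SSIS Catalog database:

    SqlPackage.exe /Action:Publish ^
        /SourceFile:"sp_ssiscatalog.dacpac" ^
        /TargetServerName:"(local)" ^
        /TargetDatabaseName:"SsisUtilities" ^
        /v:SSISDB=SSISDB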

    Read the article

  • SQL SERVER – Effect of Collation on Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
Collation is a very important concept but it is often ignored. I have often seen developers either not understanding it or ignoring it – this is plain wrong. In simple words, collation is the set of language and sorting rules that SQL Server uses when it compares and orders character data. Well, in today’s SQL in Sixty Seconds we are going to observe how collation affects the resultset. Today’s blog post is inspired by my earlier blog post SQL SERVER – Effect of Case Sensitive Collation on Resultset. I strongly encourage you to read this earlier blog post for sample code as well as additional explanation related to the concept shared in today’s SQL in Sixty Seconds. Here is the code used in the video (a small filtering example that extends this script appears at the end of this post):

USE TempDB GO -- Sample Data Building CREATE TABLE ColTable (Col1 VARCHAR(15) COLLATE Latin1_General_CI_AS, Col2 VARCHAR(14) COLLATE Latin1_General_CS_AS) ; INSERT ColTable(Col1, Col2) VALUES ('Apple','Apple'), ('apple','apple'), ('pineapple','pineapple'), ('Pineapple','Pineapple'); GO -- Retrieve Data SELECT * FROM ColTable GO -- Retrieve Data SELECT * FROM ColTable ORDER BY Col1 GO -- Retrieve Data SELECT * FROM ColTable ORDER BY Col2 GO -- Clean up DROP TABLE ColTable GO

Related Tips in SQL in Sixty Seconds:
- SQL SERVER – Effect of Case Sensitive Collation on Resultset
- Example of Width Sensitive and Width Insensitive Collation
- Collation and Collation Sensitivity – Quiz – Puzzle – 6 of 31
- Change Collation of Database Column – T-SQL Script
- Find Collation of Database and Table Column Using T-SQL
- Default Collation of SQL Server 2008
- Cannot resolve collation conflict for equal to operation

If we like your idea we promise to share with you educational material.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
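As promised above, here is a small filtering example that extends the script. It is not part of the original post, but it shows the same collation effect on comparisons rather than ordering: against the case-insensitive Col1 both casings match, while against the case-sensitive Col2 only the exact casing matches (run these before the DROP TABLE at the end of the script):

    -- Case-insensitive column: returns both the 'Apple' and 'apple' rows
    SELECT * FROM ColTable WHERE Col1 = 'apple';

    -- Case-sensitive column: returns only the 'apple' row
    SELECT * FROM ColTable WHERE Col2 = 'apple';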

    Read the article
