Search Results

Search found 27229 results on 1090 pages for 'default aspx'.

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs send raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:
    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that does NOT inherit from the parent, which would lack the deny token and provide admin access.
    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.
    Non-solutions:
    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs; uses "encrypted" files with credentials, but checks in user space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups, but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive. I hate whoever came up with that idea more than I can possibly express with words. I spent several days trying to recover the disk - it is pretty well shot, and none of my best tricks could even get it to show up in the BIOS. The machine that broke is an HP with XP Media Center Edition on it (I don't know why either). The backups were created using the default Windows backup tool - I have a .bkf file on an external hard drive that I am trying to restore from. I've replaced the hard drive. My home machine is running Windows 7 64-bit and I'm trying to use it as a platform to restore to the other disk. Here is what I have tried:
    - I downloaded the Windows 7 NT-restore utility; however, no matter what I do it restores to my C drive rather than the specified drive. Fortunately Win7 security settings prevented it from being a complete disaster - but still not a happy thing.
    - I tried firing up the XP virtual machine. I can browse to the backups, but it says they are invalid and refuses to let me view/continue with the restore.
    - I tried installing XP to an extra hard drive on my machine; however, it bluescreens on me during the install process and I cry.
    - I tried installing XP Pro to the new drive and attempted to restore over it; it of course blackscreened on me, as that was a stupid idea.
    - I made two partitions on the new hard drive. (Apparently the BIOS on this accursed piece of junk doesn't allow HD partitions larger than 200 GB anyway, and thus fails 40 minutes into the install with an ever-descriptive "Disk Read Error". Guess how I spent last weekend?)
    - My last idea was to install XP Pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware". My brain made one of those bad hard drive clicky noises.
    I've tried several boot disks, but they don't seem to work. If anyone has a link to a good one it would be greatly appreciated. Anyone have any more ideas? I really hate asking on what seems like such a simple issue, but I am quite literally at my wit's end. Thanks - and sorry for the really long post.

    Read the article

  • QNAP (469L) with Debian: can't connect to router

    - by agtoever
    I've been running my QNAP 469L with Debian (Wheezy, deb7u3) for a few months. Yesterday I upgraded the memory to 4 GB. The system boots fine, but since the upgrade I'm not able to connect the server to my router (a TP-Link WR941ND). My configuration: the router runs a DHCP server (192.168.67.100 and up), with a preconfigured IP address for the QNAP (192.168.67.10). The router is on 192.168.67.1. As said, Debian is installed on the QNAP (which can be regarded as a normal computer). Networking hardware on the QNAP: Intel PRO/1000 Network Connection using the e1000e kernel module. This is what I have tried so far:
    - Replaced the network cable (tried 3 different cables on different router ports).
    - Checked for messages from the kernel: dmesg | grep eth. Besides the normal hardware messages I get an ADDRCONF(NETDEV_UP): eth0: link is not ready for each call to ifup.
    - Manually restarted the network: sudo service networking restart.
    - Checked sudo ifconfig (eth0 is up, but has no IP addresses).
    - Checked /etc/network/interfaces, which has (besides the loopback device) an allow-hotplug eth0 and iface eth0 inet dhcp, which is AFAIK the default Debian configuration.
    - Since the server has two Ethernet ports, I checked that I'm using the right port (the hardware address that ifconfig reports for eth0 is the same as the hardware address in the preconfigured IP address for the server in the router).
    - Did a manual sudo ifdown eth0 && sudo ifup eth0 with no results (but an extra ADDRCONF(NETDEV_UP): eth0: link is not ready in the kernel log).
    - Did a DHCP request, dhclient -v eth0: for about a minute requests are sent (according to the terminal), and at the end I get a No DHCPOFFERS received. No working leases in persistent database - sleeping.
    - Checked the router system log to see if DHCP requests are received. I see them for some devices (my Mac, my iPhone) but not from the QNAP. A log entry looks like DHCPS:Recv REQUEST from 84:85:06:07:75:6A followed by DHCPS:Send ACK to 192.168.67.101. There are no records from the QNAP's hardware address.
    So the two error messages that I do get are: ADDRCONF(NETDEV_UP): eth0: link is not ready for every ifup, and No DHCPOFFERS received. No working leases in persistent database - sleeping. for every DHCP call.
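
    As a reference point, a static-address sketch for /etc/network/interfaces (addresses taken from the question's 192.168.67.0/24 setup) is one way to take DHCP out of the equation while debugging - though a link that never comes up points at a layer-1/2 problem that a static address will not fix:

        # /etc/network/interfaces -- static fallback (sketch; addresses
        # follow the network described in the question)
        auto eth0
        iface eth0 inet static
            address 192.168.67.10
            netmask 255.255.255.0
            gateway 192.168.67.1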

    Read the article

  • Why is Windows 7 not following all routes?

    - by GigabyteProductions
    My computer is connected to my secondary router, which runs the 192.168.42.0/24 network, and my computer also has a route that directs anything on that network to the router. But for anything on that network other than the router itself, I get the ICMP response Reply from 192.168.42.194: Destination host unreachable. (192.168.42.194 being my computer). Every other network works - all of the internet, and addresses on my primary router like 192.168.1.* - just not the 192.168.42.0/24 network. route print returns:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1   192.168.42.194     276
                127.0.0.0        255.0.0.0          On-link         127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link         127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link         127.0.0.1     306
             192.168.42.0    255.255.255.0          On-link    192.168.42.194     276
           192.168.42.194  255.255.255.255          On-link    192.168.42.194     276
           192.168.42.255  255.255.255.255          On-link    192.168.42.194     276
                224.0.0.0        240.0.0.0          On-link         127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link    192.168.42.194     276
          255.255.255.255  255.255.255.255          On-link         127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link    192.168.42.194     276
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0     192.168.42.1  Default
        ===========================================================================

    The only time anything is supposed to send an ICMP Host Unreachable response is when there's no route to it, right? So why is my own computer sending that to ping or tracert when I have the route 192.168.42.0 with the mask 255.255.255.0? An IP address of 192.168.42.2 surely fits into that route. If I explicitly add a route for the IP address I am trying to access, it works, e.g. route add 192.168.42.2 mask 255.255.255.255 192.168.42.1 (the 192.168.42.1 right after the mask is the gateway - the device to send the packet to so it can route it further). But why won't it work for the implicit route that's automatically in the table? I disabled my firewall, too (I use Comodo, if anyone thinks this still serves as a problem). I've even tried explicitly adding the gateway of 192.168.42.1 to the 192.168.42.0/24 route instead of routing through 0.0.0.0's gateway, which is what On-link does, but that didn't work either, so it's not a gateway-specification problem. If the host were really unreachable, it would be the router's IP address (192.168.42.1) sending that to me... This network is entirely my own creation, so there's no problem such as an administrator locking me out, because I am the administrator.
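
    For reference, the workaround the poster describes - replacing the automatic On-link entry with an explicit-gateway route - looks like this as commands (a sketch; the metric value is illustrative):

        :: delete the automatic on-link route, then re-add it with an
        :: explicit gateway (run from an elevated command prompt)
        route delete 192.168.42.0 mask 255.255.255.0
        route add 192.168.42.0 mask 255.255.255.0 192.168.42.1 metric 10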

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". So I did some research. The answer I got from the coder is below. I did all of that, but the only thing I could not do is turn off what he refers to as the performance cache, discussed in the last paragraph. I am on CentOS.
    "Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.
    1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or into some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

        # Turn off mod_security filtering.
        SecFilterEngine Off
        # The below probably isn't needed,
        # but better safe than sorry.
        SecFilterScanPOST Off

    If the above method does not work, try putting the following lines into the file:

        SetEnvIfNoCase Content-Type \
        "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        mod_gzip_on No

    2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites."

    Read the article

  • How to make Nginx fire 504 immediately if server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is: if the cookie is NOT there, use round-robin to choose a server from the cluster; if the cookie is there, go to the server that is associated with the cookie value. The server is then responsible for setting the cookie. What I want to add is this: if the cookie is there but the server is down, fall back to the round-robin step to choose the next available server. So I actually have load balancing and want to add failover support on top of it. I have managed to do that with the help of the error_page directive, but it doesn't work as I expected: the 504 (and the fallback associated with it) fires only after a 30s timeout, even if the server is not physically available. So what I want Nginx to do is fire a 504 (or any other error, it doesn't matter) immediately - I suppose this means: when the TCP connection fails. This is the behavior we can see in browsers: if we go directly to a server when it is down, the browser immediately tells us that it can't connect. Moreover, Nginx seems to do this for the 502 error: if I intentionally misconfigure my servers, Nginx fires a 502 immediately. Configuration (stripped down to basics):

        http {
            upstream my_cluster {
                server 192.168.73.210:1337;
                server 192.168.73.210:1338;
            }

            map $cookie_myCookie $http_sticky_backend {
                default 0;
                value1 192.168.73.210:1337;
                value2 192.168.73.210:1338;
            }

            server {
                listen 8080;

                location @fallback {
                    proxy_pass http://my_cluster;
                }

                location / {
                    error_page 504 = @fallback;

                    # Create a map of choices
                    # see https://gist.github.com/jrom/1760790
                    set $test HTTP;

                    if ($http_sticky_backend) {
                        set $test "${test}-STICKY";
                    }

                    if ($test = HTTP-STICKY) {
                        proxy_pass http://$http_sticky_backend$uri?$args;
                        break;
                    }

                    if ($test = HTTP) {
                        proxy_pass http://my_cluster;
                        break;
                    }

                    return 500 "Misconfiguration";
                }
            }
        }

    Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here. EDIT: I'm interested in a solution with the standard free version of Nginx, not Nginx Plus. Thanks.
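
    For reference, the stock nginx directives that control how quickly an upstream connection attempt is abandoned look like this (a sketch only; the 2s value is illustrative, and both directives are available in the free version):

        location / {
            proxy_pass http://my_cluster;
            # give up on the TCP connect quickly instead of the 60s default
            proxy_connect_timeout 2s;
            # on a connect error or timeout, retry on the next upstream server
            proxy_next_upstream error timeout;
        }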

    Read the article

  • Can't install .NET Framework 4.0 on Windows XP Professional version 2002 SP3 (OS bug?)

    - by that guy
    The .NET Framework 4.0 install fails on Windows XP Professional version 2002 SP3. I tried to run setup using "Run as..." to make sure admin rights are used (the "Protect my computer..." tick was deselected, of course). I tried everything: installing using online/offline setup, Windows Update. The install progresses a little and then "rolls back" and says:

        Installation did not succeed
        .NET Framework 4 has not been installed because:
        Fatal error during installation.
        For more information about this problem, see the log file.

    The full log: http://pastebay.net/1433771 Any ideas? EDIT1: I have found this in the log: "BlockIf: You must install the 32-bit Windows Imaging Component (WIC) before you run Setup. Please visit the Microsoft Download Center to install WIC, and then rerun Setup...." So I found it and launched "wic_x86_enu.exe" - but it said: "WIC Setup error: Newer version of update is already on the system." I have already installed .NET Framework 2.0 SP2, .NET Framework 3.0 SP2 and .NET Framework 3.5 SP1, but I need 4.0. EDIT2: Another attempt and its log (this time a better copy of the log file): http://pastebin.com/gmGfbM9a (copy to Notepad, save as .htm and open with an internet browser). I have tried all the solutions I could find - and nothing helped. I have found something weird: when I formatted the hard drive and installed Windows XP again, .NET Framework 4.0 installed fine, but when I plugged in my 100Mbit internet cable the operating system kind of "locked itself" and the bug returned - I could no longer install .NET Framework 4.0. There was no reason for that to happen; for example, I have Windows Server 2003 on the local network, but I don't have Active Directory enabled on it or anything like that - the server just has some folders shared, and that's all (all the server's "features" are default). I had a second PC with the same problem - with XP on it too. This seems like an operating-system bug to me. I couldn't find what was causing the problem. After many days I gave up: backed up everything, formatted the HDD and installed Windows 7 Professional 64-bit. .NET Framework 4.0 installed on it with no problem.

    Read the article

  • Why does configuring a manual IP not work for me with DHCP?

    - by user58859
    I have a broadband connection on my laptop. It gets its IP via DHCP. The configuration is:

        IP      : 192.168.1.2
        Subnet  : 255.255.255.0
        Gateway : 192.168.1.1

    Now I am curious: in the IPv4 properties, when instead of choosing "Obtain an IP address automatically" I choose "Use the following IP address" and configure everything the same, why does it not work? Does DHCP not work when the IP is configured manually? (Operating system: Windows 7.) EDIT: After configuring the IP manually, ipconfig /all shows DHCP Enabled: No. I did not do that - why did it get disabled automatically, and how do I enable it again?

        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes
        IPv4 Address. . . . . . . . . . . : 192.168.1.2(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1
        NetBIOS over Tcpip. . . . . . . . : Enabled
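
    For reference, an adapter can also be switched back to DHCP from an elevated command prompt (a sketch; the adapter name "Local Area Connection" is an assumption - check yours with netsh interface show interface):

        :: re-enable DHCP for both the address and the DNS servers
        netsh interface ip set address "Local Area Connection" dhcp
        netsh interface ip set dns "Local Area Connection" dhcp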

    Read the article

  • Convert Apache rewrite rules to nginx

    - by Shiyu Sekam
    I want to migrate an Apache setup to Nginx, but I can't get the rewrite rules working in Nginx. I had a look at the official Nginx documentation, but I still have some trouble converting it: http://nginx.org/en/docs/http/converting_rewrite_rules.html I've used http://winginx.com/en/htaccess to convert my rules, but this only works partly. The / part looks okay, the /library part as well, but the /public part doesn't work at all. Apache part:

        ServerAdmin webmaster@localhost
        DocumentRoot /srv/www/Web
        Order allow,deny
        Allow from all
        RewriteEngine On
        RewriteRule ^$ public/ [L]
        RewriteRule (.*) public/$1 [L]
        Order Deny,Allow
        Deny from all
        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^pid=([0-9]*)$
        RewriteRule ^places(.*)$ index.php?url=places/view/%1 [PT,L]
        # Extract search query in /search?q={query}&l={location}
        RewriteCond %{QUERY_STRING} ^q=(.*)&l=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1/%2 [PT,L]
        # Extract search query in /search?q={query}
        RewriteCond %{QUERY_STRING} ^q=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1 [PT,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Rewrite all other URLs to index.php/URL
        RewriteRule ^(.*)$ index.php?url=$1 [PT,L]
        Order deny,allow
        deny from all
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi
        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
        CustomLog ${APACHE_LOG_DIR}/access.log combined

    Nginx config:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            root /srv/www/Web;
            index index.html index.php;
            server_name localhost;

            location / {
                rewrite ^/$ /public/ break;
                rewrite ^(.*)$ /public/$1 break;
            }

            location /library {
                deny all;
            }

            location /public {
                if ($query_string ~ "^pid=([0-9]*)$"){
                    rewrite ^/places(.*)$ /index.php?url=places/view/%1 break;
                }
                if ($query_string ~ "^q=(.*)&l=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1/%2 break;
                }
                if ($query_string ~ "^q=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1 break;
                }
                if (!-e $request_filename){
                    rewrite ^(.*)$ /index.php?url=$1 break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    I haven't written the original ruleset, so I'm having a hard time converting it. Would you mind giving me a hint how to do it easily, or can you help me convert it, please? I really want to switch over to php5-fpm and nginx :) Thanks
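
    For what it's worth, a sketch of the usual nginx idiom for the catch-all rule - try_files instead of an if block (paths follow the question's layout; the pid/q/l query-string rules would still need their own handling, e.g. via map):

        location /public {
            # serve the file or directory if it exists,
            # otherwise route the request to the front controller
            try_files $uri $uri/ /index.php?url=$uri&$args;
        }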

    Read the article

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators, but not for me. I was wondering if anyone has any advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ){
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    It seems that the extra information in the location ^~ /admin.php block is unnecessary - does anyone know of an easy way to avoid duplicate code? Without it, nginx skips the php block and just returns the php files. Currently it applies https correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-https website in Firefox, it does not correctly return to http but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any way I could reduce the duplicate options in the configuration? Thanks in advance! EDIT: Clearing the cache/cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
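
    One common way to avoid the per-location scheme checks (a sketch, not a tested XenForo config; names and paths follow the question) is to split the HTTP and HTTPS listeners into separate server blocks and redirect with return, so the HTTPS on/off fastcgi parameters are no longer duplicated:

        server {
            listen 80;
            server_name example.net www.example.net;
            root /srv/www/example.net/public_html;
            # only the admin page is forced onto HTTPS
            location = /admin.php {
                return 301 https://example.net$request_uri;
            }
            location / {
                try_files $uri $uri/ /index.php?$uri&$args;
            }
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

        server {
            listen 443 ssl;
            server_name example.net;
            root /srv/www/example.net/public_html;
            ssl_certificate example.net.crt;
            ssl_certificate_key example.key;
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }
        }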

    Read the article

  • Setting Up My Server to Do DNS on openSUSE 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server. I am having trouble getting the domain set up. Here is what I have so far. /var/lib/named/master/andydaykin.com:

        $TTL 2d
        @                     IN SOA  andydaykin.com. root.andydaykin.com. (
                                      2011011000 ; serial
                                      0          ; refresh
                                      0          ; retry
                                      0          ; expiry
                                      0 )        ; minimum
        andydaykin.com.       IN NS   ns1.andydaykin.com.
        andydaykin.com.       IN SOA  ns1.andydaykin.com. hostmaster.andydaykin.com. (
        @.andydaykin.com.     IN NS   ns1.andydaykin.com.
        ns1.andydaykin.com.   IN A    204.12.227.33
        www.andydaykin.com.   IN A    204.12.227.33

    /etc/resolv.conf:

        search andydaykin.com
        nameserver 204.12.227.33

    /etc/named.conf:

        options {
                # The directory statement defines the name server's working directory
                directory "/var/lib/named";
                dump-file "/var/log/named_dump.db";
                statistics-file "/var/log/named.stats";
                listen-on port 53 { 127.0.0.1; };
                listen-on-v6 { any; };
                notify no;
                disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
                include "/etc/named.d/forwarders.conf";
        };
        zone "." in {
                type hint;
                file "root.hint";
        };
        zone "localhost" in {
                type master;
                file "localhost.zone";
        };
        zone "0.0.127.in-addr.arpa" in {
                type master;
                file "127.0.0.zone";
        };
        # Include the meta include file generated by createNamedConfInclude.
        # This includes all files as configured in NAMED_CONF_INCLUDE_FILES
        # from /etc/sysconfig/named
        include "/etc/named.conf.include";
        zone "andydaykin.com" in {
                file "master/andydaykin.com";
                type master;
                allow-transfer { any; };
        };
        logging {
                category default { log_syslog; };
                channel log_syslog { syslog; };
        };

    What am I doing wrong?
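
    For comparison, a minimal well-formed zone file looks something like this (a sketch; the serial and timer values are illustrative - note that the question's file opens a second SOA record that is never closed):

        $TTL 2d
        @    IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. (
                    2011011000 ; serial
                    3h         ; refresh
                    1h         ; retry
                    1w         ; expire
                    1d )       ; negative-caching TTL
             IN NS  ns1.andydaykin.com.
        ns1  IN A   204.12.227.33
        www  IN A   204.12.227.33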

    Read the article

  • How do I configure Reverse Group Membership Maintenance on an openldap server? (memberOf)

    - by emills
    I am currently working on integrating LDAP authentication into a system, and I would like to restrict access based on LDAP group. The only way to do this is via a search filter, and therefore I believe my only option is to use the "memberOf" attribute in my search filter. It is my understanding that "memberOf" is an operational attribute which can be created by the server for me any time a new "member" attribute is created for any "groupOfNames" entry on the server. My main goal is to be able to add a "member" attribute to an existing "groupOfNames" entry and have a matching "memberOf" attribute be added to the DN I provide. What I have managed to achieve so far: I'm still pretty new to LDAP administration, but based on what I found in the OpenLDAP admin's guide, it looks like Reverse Group Membership Maintenance, aka the "memberof overlay", would achieve exactly the effect I am looking for. My server is currently running a package installation (slapd on Ubuntu) of OpenLDAP 2.4.15, which uses "cn=config"-style runtime configuration. Most of the examples I have found still reference the older "slapd.conf" method of static configuration, and I have tried my best to adapt the configurations to the new directory-based model. I have added the following entries to enable the memberof overlay module.
    Enable the module with olcModuleLoad, in cn=config/cn\=module\{0\}.ldif:

        dn: cn=module{0}
        objectClass: olcModuleList
        cn: module{0}
        olcModulePath: /usr/lib/ldap
        olcModuleLoad: {0}back_hdb
        olcModuleLoad: {1}memberof.la
        structuralObjectClass: olcModuleList
        entryUUID: a410ce98-3fdf-102e-82cf-59ccb6b4d60d
        creatorsName: cn=config
        createTimestamp: 20090927183056Z
        entryCSN: 20091009174548.503911Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009174548Z

    Enable the overlay for the database, letting it use its default settings (groupOfNames, member, memberOf, etc.), in cn=config/olcDatabase={1}hdb/olcOverlay\=\{0\}memberof:

        dn: olcOverlay={0}memberof
        objectClass: olcMemberOf
        objectClass: olcOverlayConfig
        objectClass: olcConfig
        objectClass: top
        olcOverlay: {0}memberof
        structuralObjectClass: olcMemberOf
        entryUUID: 6d599084-490c-102e-80f6-f1a5d50be388
        creatorsName: cn=admin,cn=config
        createTimestamp: 20091009104412Z
        olcMemberOfRefInt: TRUE
        entryCSN: 20091009173500.139380Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009173500Z

    My current result: with the above configuration, I am able to add a NEW "groupOfNames" with any number of "member" entries and have all the involved DNs updated with a "memberOf" attribute. This is part of the behavior I would expect. While I believe the following should have been accomplished by the memberof overlay, I still do not know how to do it, and I would gladly welcome any advice:
    - Add a "member" attribute to an EXISTING "groupOfNames" and have a corresponding "memberOf" attribute be created automatically.
    - Remove a "member" attribute and have the corresponding "memberOf" attribute be removed automatically.
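
    For reference, the modify operation being described would look like this as LDIF fed to ldapmodify (a sketch; the DNs are illustrative, and the overlay only maintains memberOf for group changes made after it is enabled):

        dn: cn=mygroup,ou=groups,dc=example,dc=com
        changetype: modify
        add: member
        member: uid=jdoe,ou=people,dc=example,dc=com

    applied with something like: ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -f add-member.ldif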

    Read the article

  • My new hard drive doesn't have rights on my old one?

    - by Allan
    Until recently I had a 1 TB hard disk with Windows 7 on it. I bought myself an SSD, removed the old hard disk and installed Windows 7 on the new one. After that I put the old hard disk back and formatted it, so now I can use it as backup and to keep files on. Nice, right? Well, I was updating the .NET Framework through Windows Update when it stalled. I noticed some space was used on one of the drives of my secondary (previously primary) hard disk. Apparently the .NET Framework was trying to save some temporary files on my secondary disk, because it was the one with the most space - and it seems it didn't get access. I cancelled the installation and rebooted the computer. Now I want to remove the temporary folder on the secondary hard disk, but it tells me "You don't have access by SYSTEM". I don't understand: my user is an administrator, it's the only user there is, and at the same time I can remove and delete any other folder on that drive. I'm gonna go a little pseudo here, but it feels as if the computer treats the old hard disk as protected from tampering by the new SSD. Also, I feel I should mention they are both listed as primary - primary 0 and primary 1 - and both use SATA cables. My old hard drive was partitioned into 3 drives. Two of them said the current owner was 'Administrator/myPCname' and the third one said the current owner was 'SYSTEM'. I changed them all to the only one I could pick from the list, which is my user, since 'Administrators/myPCname' wasn't exactly wrong... Could it be that they were somehow still attached to the old OS? The fact is I named my computer the exact same thing as it was called before installing the new Windows, so I can't really tell whether it's an old ownership or not. Also, I'm currently logged in as 'myname' and I'm an administrator. Trying to delete the previously mentioned files, it says "you need access from 'myname'" - and it can't delete. That seems really messed up; I mean, I'm logged in as the very name it wants me to use. Is there maybe some way I could reset all the users on my computer? Or restore some default? I don't know - I just want it to take a form I have always known, from a standard Windows point of view.
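
    For reference, ownership and permissions on a stubborn folder are usually reclaimed from an elevated command prompt with something like this (a sketch; the path is illustrative):

        :: take ownership of the folder tree, then grant full control
        takeown /f D:\StuckTempFolder /r /d y
        icacls D:\StuckTempFolder /grant Administrators:F /t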

    Read the article

  • Connection closed by remote host followed by Connection refused

    - by Khosrow
    All of a sudden my SSH connection to the server has been damaged. Here is what happened:

        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] port <PORT>.
        debug1: Connection established.
        debug1: identity file /home/khosrow/.ssh/identity type -1
        debug1: identity file /home/khosrow/.ssh/id_rsa type -1
        debug1: identity file /home/khosrow/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    I recently updated the box with yum update, and sshd got updated as well. I honestly don't know whether this caused any damage, but it did prompt that /etc/ssh/sshd_config was stored as /etc/ssh/sshd_config.rpmnew, which is quite normal. I've seen similar posts while googling, and almost all of them suggest checking /etc/hosts.allow and /etc/hosts.deny - which, in my case, I can't: I cannot connect to the box to see what's going on there. I rebooted the box through the web interface of the server provider, and it got even worse. I'm now getting this:

        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] <PORT>.
        debug1: connect to address <IP> port <PORT>: Connection refused
        ssh: connect to host <HOST> port <PORT>: Connection refused

    with both <CUSTOM_PORT> and the default port 22. I would really appreciate it if anyone could help me with this.
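
    For reference, once console or rescue access to the box is available, checks along these lines usually narrow it down (a sketch; all standard CentOS/RHEL tools):

        sshd -t                      # validate sshd_config after the .rpmnew change
        netstat -tlnp | grep sshd    # confirm sshd is listening on the expected port
        iptables -L -n               # look for REJECT/DROP rules that appeared on reboot
        cat /etc/hosts.deny          # TCP wrappers, as the other posts suggest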

    Read the article

  • Wifi network stopped being visible (and usable) (Linksys wag320n)

    - by s427
    Basically, my wifi network simply stopped working for no apparent reason. It doesn't appear in the list of available networks anymore. I can see all my neighbors' networks, but not mine. It's as if it doesn't exist anymore. The internet connection (non-wifi), which goes through the same modem/router, is fine though. I already had a similar problem about one year ago (see here: Wifi network SSID not visible), just after buying this very modem. I finally got it to work after performing two factory resets and getting rid of the Cisco "Magic" software; but this time that isn't working. I use a Linksys router-modem (WAG320N) which is directly connected (via network cable) to my desktop computer (Windows 7). I have (mainly) two devices that use the wifi network: my phone (Samsung Galaxy Nexus) and an Asus tablet (TF201, aka Transformer Prime). I also resurrected an old laptop (Dell, running Windows XP) to test with, and it doesn't see anything either (apart from the 20 other wifi networks, of course ^^). The wifi network was working just fine and had been for about a year. I haven't touched the modem settings, so I have no idea what's causing the problem. I have tried:
    - making my phone "forget" my network, hoping it would see it again after that: no luck.
    - re-entering the network information (SSID/password) manually on my phone: still no luck (says it's not in range).
    - exporting the modem configuration, resetting the modem (factory reset, via the modem admin), restarting it, and importing the configuration: nope.
    - factory reset, turning it off for 15 minutes, restarting, re-doing the factory reset, and entering the configuration manually: still nothing.
    Has anybody experienced something similar before? Have you any suggestion to fix it? Thanks in advance. PS: to clear things up, here are the settings of my modem regarding wifi:

        Basic wireless settings:
          Configuration: manual
          Radio Band: 2.4GHz
          Wireless Network Mode: B/G/N-Mixed
          SSID: s427
          Channel Bandwidth: Wide - 40 MHz Channel
          Wide Channel: 9 - 2.452GHz
          Standard Channel: 11 - 2.462GHz
          SSID Broadcast: Enable
        Advanced Wireless Settings:
          AP Isolation: Disable
          Authentication Type: Auto
          Basic Rate: Default
          Transmission Rate: Auto
          N Transmission Rate: Auto
          CTS Protection Mode: Disable
          Beacon Interval: 100
          DTIM Interval: 1
          Fragmentation Threshold: 2346
          RTS Threshold: 2346

    Read the article

  • Glassfish JSF/EAR Apache 2.2 proxy_ajp_mod Referred Content Missing (images/links/etc)

    - by BillR
    Full disclosure: since this seems to be more of a configuration issue, I deleted this from Stack Overflow (where it wasn't getting any response) and reposted here. The problem is how to change the requestContextPath served up by GlassFish behind mod_proxy_ajp. The site/app runs fine if connecting directly to GlassFish on port 8080, which is ultimately not what I want to do. So I need help with the configuration of my servers and the JSF deployment. I can see the issue but don't know how to resolve it. It has to do with the requestContextPath. Simply put, Apache directs to http://mysite.com/welcome.xhtml, which is correct and what I want, but the page arrives minus the images and styles. The issue is that GlassFish itself is still pointing at http://mysite.com/myapp/*, so all links it serves in the app/site still refer to the requestContextPath - that is, the /myapp/* part of http://mysite.com/myapp/welcome.xhtml. When I look at the page source, images referred to with relative links still point to the requestContextPath (that is, /myapp/). This is fixable, but a real pain. With page links, however, I can't set a relative path: if I hover over the contact page link I see http://mysite.com/myapp/contact.xhtml, and if I click it, I get a 404. You can see the /myapp/ context path in the page source as well. If I type in the URL http://mysite.com/contact.xhtml I get the page, minus its referred links (requestContextPath). On Apache:

        ProxyPass / ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse / ajp://littlewalterserver:8009/myapp_Project-web

    On GlassFish:

        asadmin create-network-listener --listenerport 8009 --protocol http-listener-1 --jkenabled true jk-connector

    I have tried going into GlassFish and setting the web app as the default web app. I have changed the / in glassfish-web.xml (and checked to make sure it was the same in the EAR file). How can I get GlassFish to not include the /myapp/ context in the URLs? This has to be easy if you know how, but I don't know how - can someone help out here? Thanks.
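
    For reference, the context root is normally pinned in the web module's glassfish-web.xml (a sketch; the element is standard, though when deploying an EAR the <context-root> in application.xml takes precedence over it):

        <?xml version="1.0" encoding="UTF-8"?>
        <glassfish-web-app>
            <!-- deploy the web module at / so generated URLs don't carry /myapp -->
            <context-root>/</context-root>
        </glassfish-web-app>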

    Read the article

  • Ajax Control Toolkit December 2013 Release

    - by Stephen.Walther
    Today, we released a new version of the Ajax Control Toolkit that contains several important bug fixes and new features. The new release contains a new Tabs control that has been entirely rewritten in jQuery. You can download the December 2013 release of the Ajax Control Toolkit at http://Ajax.CodePlex.com. Alternatively, you can install the latest version directly from NuGet.
    The Ajax Control Toolkit and jQuery: The Ajax Control Toolkit now contains two controls written with jQuery: the ToggleButton control and the Tabs control. The goal is to rewrite the Ajax Control Toolkit to use jQuery instead of the Microsoft Ajax Library gradually over time. The motivation for rewriting the controls in the Ajax Control Toolkit to use jQuery is to modernize the toolkit. We want to continue to accept new controls written for the Ajax Control Toolkit contributed by the community. The community wants to use jQuery. We want to make it easy for the community to submit bug fixes. The community understands jQuery.
    Using the Ajax Control Toolkit with a Website that Already uses jQuery: But what if you are already using jQuery in your website? Will adding the Ajax Control Toolkit to your website break your existing website? No, and here is why. The Ajax Control Toolkit uses jQuery.noConflict() to avoid conflicting with an existing version of jQuery in a page. The version of jQuery that the Ajax Control Toolkit uses is represented by a variable named actJQuery. You can use actJQuery side-by-side with an existing version of jQuery in a page without conflict. Imagine, for example, that you add jQuery to an ASP.NET page using a <script> tag like this:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="TestACTDec2013.WebForm1" %>
        <!DOCTYPE html>
        <html>
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <script src="Scripts/jquery-2.0.3.min.js"></script>
                <ajaxToolkit:ToolkitScriptManager runat="server" />
                <ajaxToolkit:TabContainer runat="server">
                    <ajaxToolkit:TabPanel runat="server">
                        <HeaderTemplate>Tab 1</HeaderTemplate>
                        <ContentTemplate><h1>First Tab</h1></ContentTemplate>
                    </ajaxToolkit:TabPanel>
                    <ajaxToolkit:TabPanel runat="server">
                        <HeaderTemplate>Tab 2</HeaderTemplate>
                        <ContentTemplate><h1>Second Tab</h1></ContentTemplate>
                    </ajaxToolkit:TabPanel>
                </ajaxToolkit:TabContainer>
            </div>
            </form>
        </body>
        </html>

    The page above uses the Ajax Control Toolkit Tabs control (TabContainer and TabPanel controls). The Tabs control uses the version of jQuery that is currently bundled with the Ajax Control Toolkit (jQuery version 1.9.1). The page above also includes a <script> tag that references jQuery version 2.0.3. You might need that particular version of jQuery, for example, to use a particular jQuery plugin. The two versions of jQuery in the page do not create a conflict. This fact can be demonstrated by entering the following two commands in the JavaScript console window:

        actJQuery.fn.jquery
        $.fn.jquery

    Typing actJQuery.fn.jquery will display the version of jQuery used by the Ajax Control Toolkit, and typing $.fn.jquery (or jQuery.fn.jquery) will show the version of jQuery used by other jQuery plugins in the page.
    Preventing jQuery from Loading Twice: So by default, the Ajax Control Toolkit will not conflict with any existing version of jQuery used in your application. However, this does mean that if you are already using jQuery in your application then jQuery will be loaded twice. For performance reasons, you might want to avoid loading the jQuery library twice. By taking advantage of the <remove> element in the AjaxControlToolkit.config file, you can prevent the Ajax Control Toolkit from loading its version of jQuery:

        <ajaxControlToolkit>
            <scripts>
                <remove name="jQuery.jQuery.js" />
            </scripts>
            <controlBundles>
                <controlBundle>
                    <control name="TabContainer" />
                    <control name="TabPanel" />
                </controlBundle>
            </controlBundles>
        </ajaxControlToolkit>

    Be careful here: the name of the script being removed - jQuery.jQuery.js - is case-sensitive. If you remove jQuery then it is your responsibility to add the exact same version of jQuery back into your application. You can add jQuery back using a <script> tag like this:

        <script src="Scripts/jquery-1.9.1.min.js"></script>

    Make sure that you add the <script> tag before the server-side <form> tag or the Ajax Control Toolkit won't detect the presence of jQuery. Alternatively, you can use the ToolkitScriptManager like this:

        <ajaxToolkit:ToolkitScriptManager runat="server">
            <Scripts>
                <asp:ScriptReference Name="jQuery.jQuery.js" />
            </Scripts>
        </ajaxToolkit:ToolkitScriptManager>

    The Ajax Control Toolkit is tested against the particular version of jQuery that is bundled with it. Currently, the Ajax Control Toolkit uses jQuery version 1.9.1. If you attempt to use a different version of jQuery with the Ajax Control Toolkit then you will get the exception jQuery 1.9.1 is required in your JavaScript console window. If you need to use a different version of jQuery in the same page as the Ajax Control Toolkit then you should not use the <remove> element. Instead, allow the Ajax Control Toolkit to load its version of jQuery side-by-side with the other version of jQuery.
    Lots of Bug Fixes: As usual, we implemented several important bug fixes with this release. The bug fixes concerned the following three controls:
    - Tabs control: in the course of rewriting the Tabs control to use jQuery, we fixed several bugs related to the Tabs control.
    - AjaxFileUpload control: we resolved an issue concerning the AjaxFileUpload and the TMP directory.
    - HTMLEditor control: we updated the HTMLEditor control to use the new Ajax Control Toolkit bundling and minification framework.
    Summary: I would like to thank the Superexpert team for their hard work on this release. Many long hours of coding and testing went into making this release possible.

    Read the article

  • Add Your Own Domain to Your WordPress.com Blog

    - by Matthew Guay
    Now that you've got a nice blog on WordPress.com, why not get your own domain to brand your site? Here's how you can easily register a new domain or move your existing domain to your WordPress site. By default, your free WordPress address is yourblog'sname.wordpress.com. But whether this is a personal or a company blog, it can be nice to have your own domain to really brand your site and make it your own. Or, if you already have another website and want to use WordPress as a blog for it, you could even add blog.yoursite.com or any other subdomain. Adding a domain to your WordPress.com blog is a paid upgrade; registering and mapping a new domain to your account costs $14.97 a year, while mapping a domain you already own to your WordPress blog costs $9.97 a year.
    Getting Started: Login to your blog's dashboard, click the arrow beside Upgrades in the sidebar, and select Domains. Enter the domain or subdomain you want to add to your site in the text box, and click Add domain to blog. If you entered a new domain you want to register, WordPress will make sure the domain is available and then present you a registration form. Enter your information, and then click Register Domain. Or, if you enter a domain that's already registered, you will see a prompt. If this domain is a domain you own, you can map it to WordPress.com. Login to your domain registrar account and switch your nameservers to:

        NS1.WORDPRESS.COM
        NS2.WORDPRESS.COM
        NS3.WORDPRESS.COM

    The DNS settings page for your domain may be different, depending on your registrar. Alternately, if you want to map a subdomain, such as blog.yoursite.com, to your WordPress blog, create the following CNAME record at your domain registrar (you may have to contact your registrar's support to do this), substituting your subdomain, domain, and blog name when creating the record:

        subdomain.yourdomain.com. IN CNAME yourblog.wordpress.com.

    Once your settings are correct, click Try Again in your WordPress dashboard. The DNS settings may take a while to update, but once WordPress can tell your DNS settings point to it, you will see a confirmation screen. Click Map Domain to add this domain to your WordPress blog. Now you're ready to pay for your domain mapping or registration. Depending on your purchase, the information and price shown may be different; here we're mapping a domain we already have registered, so it costs $9.97. Select your method of payment, enter your payment information or sign in with your Paypal account, and continue as usual. Once your purchase is finished, you'll be returned to the Domains page on WordPress. Try going to your new domain, and make sure it opens your blog. If it works, click the bullet beside the new domain, and click Update Primary Domain. Now, when people visit your WordPress site, they'll see your new domain in the address bar. You can still access your blog from your old yourname.wordpress.com address, but it will redirect to your new domain.
    Conclusion: Having a personalized domain is a great way to make your blog more professional, while still taking advantage of the ease of use that WordPress.com offers. And, if you have your own domain, you can easily move your site traffic to a different hosting provider in the future if you need to. The process is slightly complicated, but for $15/year we found this one of the best upgrades you could do to your WordPress.com blog.
    If you want to see an example of a site created with WordPress, check out Matthew's tech site techinch.com. And, if you're just getting started with WordPress, check out our series on how to start your WordPress.com blog, personalize it, and easily post content to it from anywhere.

    Read the article

  • Install XP Mode with VirtualBox Using the VMLite Plugin

    - by Mysticgeek
    Would you like to run XP Mode, but prefer Sun's VirtualBox for virtualization? Thanks to the free VMLite plugin, you can quickly and easily run XP Mode in or alongside VirtualBox. Yesterday we showed you one method to install XP Mode in VirtualBox; unfortunately, in that situation you lose XP's activation, and it isn't possible to reactivate it. Today we show you a tried and true method for running XP Mode in VirtualBox and integrating it seamlessly with Windows 7. Note: you need Windows 7 Professional or above to use XP Mode in this manner.
    Install XP Mode: Make sure you're logged in with Administrator rights for the entire process. The first thing you'll want to do is install XP Mode on your system (link below). You don't need to install Windows Virtual PC. Go through and install XP Mode using the defaults.
    Install VirtualBox: Next you'll need to install VirtualBox 3.1.2 or higher if it isn't installed already. If you have an older version of VirtualBox installed, make sure to update it. During setup you're notified that your network connection will be reset. Check the box next to Always trust software from "Sun Microsystems, Inc." then click Install. Setup only takes a couple of minutes, and does not require a reboot... which is always nice.
    Install VMLite XP Mode Plugin: The next thing we'll need to install is the VMLite XP Mode Plugin. Again, installation is simple - just follow the install wizard. During the install, as with VirtualBox, you'll be asked to install the device software. After it's installed, go to the Start menu and run VMLite Wizard as Administrator. Select the location of the XP Mode package, which by default should be in C:\Program Files\Windows XP Mode. Accept the EULA... and notice that it's meant for Windows 7 Professional, Enterprise, and Ultimate editions. Next, name the machine, choose the install folder, and type in a password. Select whether you want Automatic Updates turned on or not. Wait while the process completes, then click Finish. The VMLite XP Mode will set itself up the first time it runs. That is all there is to this section: you can run XP Mode from within the VMLite Workstation right away. XP Mode is fully activated already, and the Guest Additions are already installed, so there's nothing else you need to do - XP Mode is completely ready to use.
    Integration with VirtualBox: Since we installed the VMLite Plugin, when you open VirtualBox you'll see it listed as one of your machines, and you can start it up from here. Here we see VMLite XP Mode running in Sun VirtualBox.
    Integrate with Windows 7: To integrate it with Windows 7, click on Machine \ Seamless Mode. The XP menu and Taskbar will be placed on top of Windows 7, and from there you can access what you need from XP Mode. Here we see XP running in VirtualBox in Seamless Mode, with the old XP WordPad sitting next to the new Windows 7 version of WordPad. This works so seamlessly you forget whether you're working in XP or Windows 7. In this example we have Windows Home Server Console running in Windows 7 while installing MSE from IE 6 in XP Mode. At the top of the screen you will still have access to the VM's controls. You can click the button to exit Seamless Mode, or simply hit right "CTRL+L".
    Conclusion: This is a very slick way to run XP Mode in VirtualBox on any machine that doesn't have Hardware Virtualization. This method also doesn't lose the XP Mode activation and is actually extremely easy to set up.
    If you prefer VMware (like we do), check out how to run XP Mode on machines without Hardware Virtualization capability, and also how to create an XP Mode for Vista and Windows 7 Home Premium. Links: Download XP Mode | Download VirtualBox | Download VMLite XP Mode Plugin for VirtualBox (Site Registration Required)

    Read the article

  • Install XP Mode with VirtualBox Using the VMLite Plugin

    - by Mysticgeek
    Would you like to run XP Mode, but prefer Sun’s VirtualBox for virtualization?  Thanks to the free VMLite plugin, you can quickly and easily run XP Mode in or alongside VirtualBox. Yesterday we showed you one method to install XP Mode in VirtualBox, unfortunately in that situation you lose XP’s activation, and it isn’t possible to reactivate it. Today we show you a tried and true method for running XP mode in VirtualBox and integrating it seamlessly with Windows 7. Note: You need to have Windows 7 Professional or above to use XP Mode in this manner. Install XP Mode Make sure you’re logged in with Administrator rights for the entire process. The first thing you’ll want to do is install XP Mode on your system (link below). You don’t need to install Windows Virtual PC. Go through and install XP Mode using the defaults. Install VirtualBox Next you’ll need to install VirtualBox 3.1.2 or higher if it isn’t installed already. If you have an older version of VirtualBox installed, make sure to update it. During setup you’re notified that your network connection will be reset. Check the box next to Always trust software from “Sun Microsystems, Inc.” then click Install.   Setup only takes a couple of minutes, and does not require a reboot…which is always nice. Install VMLite XP Mode Plugin The next thing we’ll need to install is the VMLite XP Mode Plugin. Again Installation is simple following the install wizard. During the install like with VirtualBox you’ll be asked to install the device software. After it’s installed go to the Start menu and run VMLite Wizard as Administrator. Select the location of the XP Mode Package which by default should be in C:\Program Files\Windows XP Mode. Accept the EULA…and notice that it’s meant for Windows 7 Professional, Enterprise, and Ultimate editions. Next, name the machine, choose the install folder, and type in a password. Select if you want Automatic Updates turned on or not. Wait while the process completes then click Finish.   The VMLite XP Mode will set up to run the first time. That is all there is to this section. You can run XP Mode from within the VMLite Workstation right away. XP Mode is fully activated already, and the Guest Additions are already installed, so there’s nothing else you need to do!  XP Mode is the whole way ready to use. Integration with VirtualBox Since we installed the VMLite Plugin, when you open VirtualBox you’ll see it listed as one of your machines and you can start it up from here.   Here we see VMLite XP Mode running in Sun VirtualBox. Integrate with Windows 7 To integrate it with Windows 7 click on Machine \ Seamless Mode…   Here you can see the XP menu and Taskbar will be placed on top of Windows 7. From here you can access what you need from XP Mode.   Here we see XP running on Virtual Box in Seamless Mode. We have the old XP WordPad sitting next to the new Windows 7 version of WordPad. This works so seamlessly you forget if your working in XP or Windows 7. In this example we have Windows Home Server Console running in Windows 7, while installing MSE from IE 6 in XP Mode. At the top of the screen you will still have access to the VMs controls.   You can click the button to exit Seamless Mode, or simply hit the right “CTRL+L” Conclusion This is a very slick way to run XP Mode in VirtualBox on any machine that doesn’t have Hardware Virtualization. This method also doesn’t lose the XP Mode activation and is actually extremely easy to set up. 
If you prefer VMware (like we do), check out how to run XP Mode on machines without hardware virtualization capability, and also how to create an XP Mode for Vista and Windows 7 Home Premium.

Links: Download XP Mode | Download VirtualBox | Download VMLite XP Mode Plugin for VirtualBox (site registration required)
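As a small aside, once the VMLite XP Mode VM is registered with VirtualBox you can also script its startup rather than clicking through the GUI. Here is a minimal C# sketch; it assumes VBoxManage.exe is on the PATH and that the VM is registered under the name "VMLite XP Mode" (both are assumptions; run VBoxManage list vms to see the actual name on your system):

    using System.Diagnostics;

    class StartXpMode
    {
        static void Main()
        {
            // Launch the VM through VirtualBox's command-line front end.
            // "VMLite XP Mode" is a guess at the registered VM name; adjust to match yours.
            var psi = new ProcessStartInfo("VBoxManage", "startvm \"VMLite XP Mode\" --type gui")
            {
                UseShellExecute = false
            };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit(); // VBoxManage returns once the VM process has been spawned
            }
        }
    }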

    Read the article

  • AGENT: The World's Smartest Watch

    - by Rob Chartier
AGENT: The World's Smartest Watch, by Secret Labs + House of Horology.

Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page. If this post disagrees with any documentation on agentwatches.com, kickstarter.com, etc., those official sites take precedence.

Overview
The next-generation smartwatch with brand-new technology: world-class developer tools, unparalleled battery life, and Qi wireless charging. Kickstarter Page, Comments. Funding period: May 21, 2013 - Jun 20, 2013. MSRP: $249.

Other URLs
- http://www.agentwatches.com/
- https://www.facebook.com/agentwatches
- http://twitter.com/agentwatches
- http://pinterest.com/agentwatches/
- http://paper.li/robchartier/1371234640

Developer Story
The first official launch of the preview SDK and emulator will happen on 20-Jun-2013. All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3. The SDK will ship with the first round of the expected API for developers, along with an emulator. That said, there is no need to wait for the SDK: you can download the tooling now and get started on apps and watch faces immediately. The only thing you will not be able to work with is the API itself; for watch faces, for example, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.

Does it look good?
Before we dig into any more of the gory details, here are a few photos of the currently available prototype models: the watch on the tiny Qi charger; the out-of-range alert (if you wander too far from your phone, your watch lets you know with a vibration and a message, and all but one button dismisses it); an app showing the premium weather data; and the nice stitching on the straps (leather and silicone will be available, in short, regular, and long lengths).

On to those gory details….

Hardware Specs
- Processor: 120MHz ARM Cortex-M4 (ATSAM4SD32) with a secondary AVR co-processor
- Flash & RAM: 2MB of onboard flash and 160KB of RAM; 1/4 of the onboard flash is used by the OS; the flash is permanent (non-volatile) storage
- Bluetooth: Bluetooth 4.0 BD/EDR + LE. Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP, etc.) will work fine
- Sensors: 3D accelerometer (motion, ST LSM303DLHC), ambient light sensor, hardware power metering; vibration motor (you can pulse it to create vibration patterns; it is driven with PWM, and the vibration strength is not yet confirmed); no piezo/speaker or microphone
- Other: Qi wireless charging (no NFC, no wall adapter included), custom LED backlight, no GPS in the watch (it uses the GPS in your phone); AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth; RoHS, Pb-free

Battery
- Expected to use a CR2430-sized rechargeable battery, replaceable (Mouser, Amazon)
- Estimated charging time from empty is 2 hours with the provided charger
- 7 days typical with Bluetooth on, 30 days with Bluetooth off (watch-face-only mode)
- The battery should last at least 2 years, with hundreds of charge cycles

Physical dimensions
- Roughly 38mm top-to-bottom and 35mm left-to-right on the front face, around 12mm in depth
- 22mm strap, attached with two ~1/16" hex screws on the watch pin
- Top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD)
- The lens is mineral glass with anti-glare treatment

Strap options
- Leather and silicone straps will be available, expected in three sizes

Display
- 1.28" Sharp Memory Display, 128x128 pixels; the display stays on 100% of the time

Buttons
- Custom "pusher" buttons; they will not make noise like a mouse click, and are very durable
- The top-left button activates the backlight; the bottom-left changes apps; the three buttons on the right are up/select/down and can be used for custom purposes by apps
- The backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds

Device Support
- Android 2.3 or newer, iPhone 4S or newer, Windows Phone 8 or newer
- Heart-rate monitors: Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the monitor to support; almost limitless Bluetooth device support

Internationalization & Localization
- Full UTF-8 support from the ground up
- AGENT's user interface is in English; your content (caller ID, music tracks, notifications) will be in your native language
- The plan covers most major character sets, with Latin characters pre-loaded on the watch; Simplified Chinese will be available

Feature overview
- Phone-lost alert, caller ID, music control (possibly volume control)
- Wireless charging, timer, stopwatch, vibrating alarm (possibly custom vibrations for caller ID)
- A few default watch faces; airplane mode (on demand or at low power); can be turned off completely
- Customizable third-party watch faces and applications, which can be loaded over Bluetooth
- Sample apps that may be installed: Weather; sample apps not installed: Exercise App

Other
- Possible Skype integration over Bluetooth
- They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone) that you'll use to load apps onto the watch
- You will be able to cancel phone calls; with compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.
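Since face rendering boils down to NETMF Bitmap drawing, here is a minimal C# sketch of a digital face. It assumes a 128x128 Bitmap back buffer that the (not-yet-released) AGENT SDK lets you Flush() to the display, and a font loaded from your project's resources; both are assumptions based on how other .NET Micro Framework devices work, not on the actual AGENT API:

    using System;
    using Microsoft.SPOT;
    using Microsoft.SPOT.Presentation.Media;

    public static class DigitalFaceSketch
    {
        // Draws "HH:mm" roughly centered on a 128x128 memory display.
        // 'font' would come from your resources, e.g. Resources.GetFont(...);
        // the exact resource name depends on your project (assumption).
        public static void Render(Bitmap display, Font font)
        {
            display.Clear();                                   // wipe the back buffer
            string time = DateTime.Now.ToString("HH:mm");
            display.DrawText(time, font, Color.White, 34, 52); // x/y picked for a mid-size font
            display.DrawLine(Color.White, 1, 20, 78, 108, 78); // simple underline accent
            display.Flush();                                   // push the buffer to the panel
        }
    }

On real hardware you would construct the buffer once (new Bitmap(128, 128)) and re-render on a timer; the Sharp Memory Display holds its contents between flushes.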

    Read the article

  • Aggregate SharePoint Event/Items into your Calendar view using Calendar Overlay

    - by eJugnoo
One of the most commonly used features I have seen for SharePoint (prior to 2010) in Intranet team sites is Calendars: not only the Calendar list type, but also the ability to add a Calendar view to any list that has the columns needed to construct a calendar, such as Start, End, and Title. While this was all great for a single site/calendar, the problem of having to track numerous calendars remained. The introduction of Outlook 2007's bi-directional integration with SharePoint, and particularly Outlook's ability to overlay calendars, helped bridge the gap. Now one could connect to a number of team sites and set up calendar overlays in Outlook using varying colours, to easily identify the event source and yet benefit from plotting events on a single calendar view. This was all good, but each user in your enterprise had to set it up in a "pull" fashion. That is good for flexibility, not so good when you need to "push" consistency and productivity (re-use). So what was missing in SharePoint was the ability to have server-side overlays that everyone can see, in a single place, aggregating multiple sources. Until SharePoint 2010 arrived!

Calendars Overlay in SharePoint 2010
There are Calendar lists and Calendar views. A view can be created for almost any list, as long as the list has the columns, like Start, End, and Title, needed to describe and plot an item in a calendar format. In SharePoint 2010, create a new Calendar list, go to the Calendar ribbon tab, and click Calendar Overlay. You get a screen listing the existing overlays associated with the current calendar (a list, in our case). Click on "New Calendar" and notice the breadcrumb: you are adding an overlay to an existing list (Team Calendar, in our case). You have the choice of "pulling" calendar info from an existing calendar (list/view) in SharePoint, or even from Exchange! Set standard info like a name and description, and decide the colour you want for the items in the aggregated overlay. Select the source site/list/view, anywhere in the farm. When you select Exchange as the source, you get the option to add OWA and Exchange Web Services URLs. I will cover the details of connecting with Exchange in another post, and focus on overlays with SharePoint for this one. Once you have added a new calendar overlay to an existing calendar view, the overlay colours show up in the Day, Week, and Month views.

Now, if you decide to connect this calendar to Outlook to sync the items, it will only sync items from the main view, not from the overlay sources; such calendar overlays are server-side aggregation only. That increased my curiosity, so I tried adding the calendar list view as a web part on a new page. As you can see, this instance of the view didn't include items from the source we had added to the default calendar view. This is probably because it is a new web-part view for the page. If you want to add an overlay to this one, you have to redo it from the ribbon. This also means that, depending on purpose and context, you get the flexibility to decide which overlay is suited. Note that you can only add 10 overlays to an existing view instance.

Conclusion
Calendar Overlay is clearly a very useful feature that fills the gap of not being able to aggregate information from multiple sources into a calendar view within the context of current items. The source of items can be existing SharePoint calendar views on any site, or even Exchange (via OWA/Exchange Web Services). The list type of the source doesn't matter; it just needs a Calendar view type available. You can have 10 overlays. Overlays apply to the specific view only, and are server-side only, which means they do not get synced in Outlook. While you can drag-drop current list items, you cannot edit overlay items: they are read-only within the scope of the current calendar view. You can of course click a source overlay item to edit it at the source. I'd like to hear how you think overlays will help you in your case, or how you are already using them... Enjoy SharePoint! --Sharad
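The overlay itself is configured through the UI, but the same kind of server-side aggregation can be sketched in code with the server object model. A minimal example, assuming two calendar lists named "Team Calendar" and "Release Calendar" (hypothetical names) on a hypothetical site URL; DateRangesOverlap plus ExpandRecurrence is the standard CAML pattern for pulling a month of events, recurring instances included:

    using System;
    using System.Collections.Generic;
    using Microsoft.SharePoint;

    class CalendarAggregationSketch
    {
        static void Main()
        {
            var events = new List<string>();

            using (SPSite site = new SPSite("http://intranet"))   // hypothetical site URL
            using (SPWeb web = site.OpenWeb())
            {
                // Aggregate this month's events from two calendar lists.
                foreach (string listName in new[] { "Team Calendar", "Release Calendar" })
                {
                    SPList list = web.Lists[listName];
                    SPQuery query = new SPQuery
                    {
                        // DateRangesOverlap matches items whose Start/End spans the period.
                        Query = @"<Where><DateRangesOverlap>
                                    <FieldRef Name='EventDate'/>
                                    <FieldRef Name='EndDate'/>
                                    <FieldRef Name='RecurrenceID'/>
                                    <Value Type='DateTime'><Month/></Value>
                                  </DateRangesOverlap></Where>",
                        ExpandRecurrence = true,       // expand recurring events into instances
                        CalendarDate = DateTime.Today  // the month to expand around
                    };

                    foreach (SPListItem item in list.GetItems(query))
                        events.Add(listName + ": " + item.Title);
                }
            }

            events.ForEach(Console.WriteLine);
        }
    }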

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that a lot of the time this will, by default, be done in a different process. I am using StructureMap as my DI container of choice, and one of the tools I am using in line with RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process-specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to.

This is where I started thinking about two things: running WCF in process, and being able to share mocked instances across processes. A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects I use when I think about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

    <system.serviceModel>
      <services>
        <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
          <endpoint address="net.pipe://localhost/MyPipe"
                    binding="netNamedPipeBinding"
                    contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
        </service>
      </services>
      <client>
        <endpoint name="MyEndpoint"
                  address="net.pipe://localhost/MyPipe"
                  binding="netNamedPipeBinding"
                  contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
      </client>
    </system.serviceModel>

You can see here that I am referencing the Agatha service type and contract, but also that my binding and address use something called named pipes. This is sort of the "magic" which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types the serializer should be aware of. So we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy it was: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.
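As an aside, the same named-pipe endpoint can be set up entirely in code rather than in app.config. A minimal sketch (my own variation, not from the original post), using only standard WCF types:

    using System;
    using System.ServiceModel;
    using Agatha.Common.WCF;
    using Agatha.ServiceLayer.WCF;

    class ProgrammaticHostSketch
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(WcfRequestProcessor));

            // Equivalent of the <endpoint> element above: same address,
            // same named-pipe binding, same Agatha contract.
            host.AddServiceEndpoint(
                typeof(IWcfRequestProcessor),
                new NetNamedPipeBinding(),
                "net.pipe://localhost/MyPipe");

            host.Open();
            Console.WriteLine("Service is running....");
            Console.ReadLine();
            host.Close();
        }
    }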
First Pass
So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract:

    public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
    {
        public InProcProxy() { }

        public InProcProxy(string configurationName) : base(configurationName) { }

        public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
        {
            return Channel.Process(requests);
        }

        public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
        {
            Channel.ProcessOneWayRequests(requests);
        }
    }

So with the proxy in place, I could use it after opening the service. Here is the code inside the console app that makes the request:

    static void Main(string[] args)
    {
        ComponentRegistration.Register();
        ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
        serviceHost.Open();
        Console.WriteLine("Service is running....");

        using (var proxy = new InProcProxy())
        {
            foreach (var operation in proxy.Endpoint.Contract.Operations)
            {
                foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                {
                    operation.KnownTypes.Add(t);
                }
            }

            var request = new GetProductsRequest();
            var responses = proxy.Process(new[] { request });
            var response = (GetProductsResponse)responses[0];
            Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
        }

        serviceHost.Close();
        Console.WriteLine("Finished");
        Console.ReadLine();
    }

So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returned two products:

    public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
    {
        public override Agatha.Common.Response Handle(GetProductsRequest request)
        {
            return new GetProductsResponse
            {
                Products = new List<ProductDto>
                {
                    new ProductDto{},
                    new ProductDto{}
                }
            };
        }
    }

Second Pass
After I did this, I started reading up on more resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class present in the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use it, I am now left with the following:

    static void Main(string[] args)
    {
        ComponentRegistration.Register();
        ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
        serviceHost.Open();
        Console.WriteLine("Service is running....");

        using (var proxy = new RequestProcessorProxy())
        {
            var request = new GetProductsRequest();
            var responses = proxy.Process(new[] { request });
            var response = (GetProductsResponse)responses[0];
            Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
        }

        serviceHost.Close();
        Console.WriteLine("Finished");
        Console.ReadLine();
    }

Cheers for now, Andy

References
- Agatha
- WCF InProcess Without WCF
- StructureMap.AutoMocking
- Cross Process Mocking
- Programming WCF Services by Juval Lowy

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
You should never have to hunt down a lost file on modern versions of Windows; just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing, but for your local files.

Controlling the Indexer
By default, the Windows search indexer watches everything under your user folder, that is, C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, checked whether the file contained the word “beluga,” and moved on.

Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders, say on a separate partition or drive such as D:\Data, you may want to add those folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes, or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window.

Searching for Files
You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar.

You can also initiate searches directly from Windows Explorer (File Explorer on Windows 8). Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows.

Using Advanced Search Operators
On Windows 7, you’ll notice that you can add “search filters” from the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results.
If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface.

Creating Saved Searches
Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files.

One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.
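The same index the shell queries is also exposed to programs through the Windows Search OLE DB provider, which is how you can drive these searches from code. A minimal C# sketch reusing the article's "beluga" example; the provider name and SYSTEMINDEX catalog are the documented Windows Search connection values, but treat the exact query shape as a starting point rather than gospel:

    using System;
    using System.Data.OleDb;

    class IndexQuerySketch
    {
        static void Main()
        {
            // Documented connection string for the Windows Search index.
            const string conn =
                "Provider=Search.CollatorDSO;Extended Properties='Application=Windows';";

            // Find files whose indexed content contains the word "beluga".
            const string sql =
                "SELECT TOP 20 System.ItemPathDisplay " +
                "FROM SYSTEMINDEX " +
                "WHERE CONTAINS('\"beluga\"')";

            using (var connection = new OleDbConnection(conn))
            using (var command = new OleDbCommand(sql, connection))
            {
                connection.Open();
                using (OleDbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0)); // full path of each hit
                }
            }
        }
    }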

    Read the article

< Previous Page | 986 987 988 989 990 991 992 993 994 995 996 997  | Next Page >