Search Results

Search found 6778 results on 272 pages for 'png hack'.


  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. instead of mysite.com/myaccount, they could have myaccount.com). They just change the A record of their domain and we then use a wildcard vhost on Apache to catch the requests for the custom domains. The setup is basically as seen below. The first vhost catches the mysite.com/myaccount requests and the second is used for myaccount.com. As you can see, they have the exact same path and PHP cookie_domain. I've noticed some weird behavior surrounding the line below "# The line below me". When it is active, the custom domains get a new session_id on every page load (one that isn't the same as the non-custom-domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is still not the same as the one they'd see on the non-custom-domain site, despite everything being on the same server. There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?). For what it's worth, I'm using a database to manage the sessions (so there are no issues with being on different servers, etc., but that's irrelevant since we only use one server to handle all requests currently anyway). I'm fairly certain it is related to some sort of CSRF browser protection, but shouldn't it be smart enough to know it's on the same server? Note: these aren't subdomains, they're entirely separate domains (but on the same server).
        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName mysite.local
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>
        #Wildcard (custom domain) vhost
        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName default
            ServerAlias *
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                # The line below me
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>
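    One approach worth testing is to stop hard-coding the cookie domain and set it per request from the incoming host, since core.php is already auto-prepended to every script. A minimal sketch, assuming core.php runs before any other session_start() call:

        <?php
        // core.php -- scope the session cookie to whatever host the request
        // actually arrived on, so custom domains get a cookie they will send back
        $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : 'mysite.local';
        $host = preg_replace('/:\d+$/', '', $host);   // drop any :port suffix
        ini_set('session.cookie_domain', $host);
        session_start();

    Note this only gives each domain a stable session of its own; browsers scope cookies per domain, so truly sharing one logged-in session between mysite.com and myaccount.com still needs an explicit hand-off between the two hosts.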

    Read the article

  • Optimize php-fpm and Varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?
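    Timeouts at 55 hits/s with a nearly idle CPU usually mean requests are queueing or bypassing the cache inside Varnish rather than saturating PHP, so it is worth confirming how many of the blitz.io hits are actually served from cache before touching php-fpm. A rough diagnostic sketch (counter and log tag names differ slightly between Varnish versions):

        # hit/miss ratio and backend traffic while the benchmark runs
        varnishstat -1 | egrep 'cache_hit|cache_miss|backend_req'
        # worker-thread pressure: queued or dropped work points at thread/timeout tuning
        varnishstat -1 | egrep 'n_wrk|sess_queued|sess_drop'
        # which URLs actually reach the backend during the test
        varnishlog -b -m 'TxURL:.'

    If nearly every benchmark URL shows up on the backend side, the problem is cacheability (cookies, Cache-Control headers) rather than raw Varnish or php-fpm capacity.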

    Read the article

  • No login prompt displayed after updating Ubuntu 10.04, broken gdm

    - by cliff
    So here's what happens: I updated my system the other day, was prompted for a reboot for the update to complete but was in the middle of working so I delayed it until after I was done. I reboot and it's broken :(. It appears to boot normally, with the following exceptions: The purple Ubuntu load screen no longer displays (though it did for the first couple of times I tried to get in). I hear the login prompt sound, but no login prompt appears. Nor is it simply "invisible" - pressing enter, typing my password, and pressing enter again do nothing. Normally my Bluetooth mouse is functional at this point, but it is not. GRUB displays recovery options for my current kernel, and for an older one (2.6.32-24). Trying to boot into .32-24 gives me an error saying "udevadm can't do something while udev is not configured". So I try solutions listed here: http://superuser.com/questions/195786/ubuntu-update-went-wrong-pc-doesnt-boot-how-can-i-repair-it Nothing I tried seemed to work, and after further Googling my hunch is that it's a problem with gdm. Please correct me if I'm wrong, I don't know all that much about how Linux/Ubuntu systems work just yet. Things I'm able to do: Boot to a live CD Ctrl-Alt-F2 after that login sound plays brings me to a console login, which I can successfully do (it's how I tried the solutions above). This works only under the current kernel. A hack I'd be willing to explore is removing the login prompt from the console, but I'd prefer to "simply" fix what's wrong. Like that guy, I need to repair the system rather than reinstall. System: Dell Inspiron 1525 Core 2 Duo Proprietary Driver for Broadcom 43xx wireless Dual-boot with Windows 7 (which is how I'm posting this, unfortunately I only have this machine and any experimenting requires constant reboots into Windows/brokenbuntu) Last package installed was Moonlight, but it appeared to install properly. Kernel: 2.6.32-25 Edit: After working with Karl's suggestions, it seems that the problem is with gdm. Error exit status 245 when attempting to sudo apt-get install --reinstall gdm, also an error processing gdm when running sudo apt-get -f install. How do I reinstall or repair gdm so that I can get back into my machine?
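    Since the console login still works under the current kernel, a rough repair sequence to try from there (assuming networking is available from the console; package names as on Ubuntu 10.04):

        sudo dpkg --configure -a                      # finish any half-configured packages
        sudo apt-get clean && sudo apt-get update
        sudo apt-get -f install                       # resolve broken dependencies first
        sudo apt-get install --reinstall gdm ubuntu-desktop
        # if gdm still fails, its logs (paths vary slightly by release) usually say why:
        less /var/log/gdm/:0-greeter.log /var/log/Xorg.0.log
        sudo service gdm restart

    If the reinstall itself keeps failing with exit status 245, the full apt/dpkg error output (and free space on /var) is the next thing to check before retrying.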

    Read the article

  • Nginx 500 Internal Server error on subdirectory

    - by juyoung518
    I'm getting a 500 Internal Server error only on sub directories. For example, If my website is example.com, example.com/index.php works. But example.com/phpbb/index.php doesn't work. It just turns up a blank php page. The HTTP header shows HTTP error 500 Internal Server error. If I enter example.com/phpbb/index.php/somedirectory, the index.php of my root directory shows up. This is all very strange. I have tried searching etc but nothing worked. tried re-installing nginx but not fixed. I'm sure I got the DNS configured right. My Nginx Config /sites-available/example.com server { server_name www.example.com; return 301 https://example.com$request_uri; } server { listen 443; listen 80; #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /var/www/example.com/public_html; index index.html index.php index.htm; ssl on; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/ssl.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ssl_stapling on; resolver 8.8.8.8; add_header Strict-Transport-Security max-age=63072000; # Make site accessible from http://localhost/ server_name example.com; location ~* \.(jpg|jpeg|png|gif|ico|css|js|bmp)$ { expires 365d; add_header Cache-Control public; } if ($scheme = http) { return 301 https://example.com$request_uri; } location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.php; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } if ($http_user_agent ~ (musobot|screenshot|AhrefsBot|picsearch|Gender|HostTracker|Java/1.7.0_51|Java) ) { return 403; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } location /phpMyAdmin { rewrite ^/* /phpmyadmin last; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } } nginx.conf user www-data; worker_processes 1; pid 
/var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## Block spammers and other unwanted visitors ## include /etc/nginx/blockips.conf; fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m; ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 100; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log off; error_log /var/log/nginx/error.log; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ## # File Cache Settings ## open_file_cache max=5000 inactive=5m; open_file_cache_valid 2m; open_file_cache_min_uses 1; open_file_cache_errors on; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/x-js text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*;
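    The symptom described (the site-wide /index.php answering for /phpbb/index.php/somedirectory) is what the root try_files fallback does when the PHP handler cannot resolve the path info. A hedged sketch of a phpBB-specific block, assuming the forum lives under the same document root, that preserves PATH_INFO instead of falling through:

        location /phpbb {
            try_files $uri $uri/ /phpbb/index.php$is_args$args;
        }
        location ~ ^/phpbb/.+\.php(/|$) {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            if (!-f $document_root$fastcgi_script_name) { return 404; }
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }

    Regex locations are matched in the order they appear, so this block has to come before the generic "location ~ \.php$" handler; the nginx error log and the php-fpm log will still show the real cause of the 500 either way.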

    Read the article

  • Well-maintained privacy add-on/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons I discovered:
    - HTTPS Everywhere: only pros. Just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes seems to make "untrustworthy" decisions.
    - RequestPolicy: seems dead (last activity 6 months ago, some annoying bugs, the official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests. It blocks by default, lets you add sites to a whitelist, and once that is done it works interaction-free in the background.
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (can never be fully sure, need to trust others).
    - Disconnect: like AdBlock Edge, just looks different; has no interaction possibilities (can never be fully sure, need to trust others, and I can't interact even if I wanted to).
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.
    All these add-ons together basically block everything insecure, but there is a lot of redundancy: NoScript has a mixed-content blocker, but Firefox has had its own for a while now. The cookie blocker in NoScript is redundant with my Firefox cookie settings. NoScript also has an XSS blocker, which is redundant with RequestPolicy. Disconnect and AdBlock Edge are extremely (though not fully) redundant. And there are some bugs (especially in RequestPolicy), and RequestPolicy seems to be dead. All in all, this list is great but has these heavy drawbacks. My favourite set would be a "NoScript Light" (script blocking only, without all the additional redundant-to-other-add-ons extras) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all the other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfil my wishes?

    Read the article

  • How can I reverse mouse movement (X & Y axis) system-wide? (Win 7 x64)

    - by Scivitri
    Short Version I'm looking for a way to reverse the X and Y mouse axis movements. The computer is running Windows 7, x64 and Logitech SetPoint 6.32. I would like a system-level, permanent fix; such as a mouse driver modification or a registry tweak. Does anyone know of a solid way of implementing this, or how to find the registry values to change this? I'll settle quite happily for how to enable the orientation feature in SetPoint 6.32 for mice as well as trackballs. Long Version People seem never to understand why I would want this, and I commonly hear "just use the mouse right-side up!" advice. Dyslexia is not something which can be cured by "just reading things right." While I appreciate the attempts to help, I'm hoping some background may help people understand. I have a user with an unusual form of dyslexia, for whom mouse movements are backward. If she wants to move her cursor left, she will move the mouse right. If she wants the cursor to move up, she'll move the mouse down. She used to hold her mouse upside-down, which makes sophisticated clicking difficult, is terrible for ergonomics, and makes multi-button mice completely useless. In olden times, mouse drivers included an orientation feature (typically a hot-air balloon you dragged upward to set the mouse movement orientation) which could be used to set the relationship between mouse movement and cursor movement. Several years ago, mouse drivers were "improved" and this feature has since been limited to trackballs. After losing the orientation feature she went back to upside-down mousing for a bit, until finding UberOptions, a tweak for Logitech SetPoint, which would enable all features for all pointing devices. This included the orientation feature. And there was much rejoicing. Now her mouse has died, and current Logitech mice require a newer version of SetPoint for which UberOptions has not been updated. We've also seen MAF-Mouse (the developer indicated the version for 64-bit Windows does not support USB mice, yet) and Sakasa (while it works, commentary on the web indicate it tends to break randomly and often. It's also just a running program, so not system-wide.). I have seen some very sophisticated registry hacks. For example, I used to use a hack which would change the codes created by the F1-F12 keys when the F-Lock key was invented and defaulted to screwing my keyboard up. I'm hoping there's a way to flip X and Y in the registry; or some other, similar, system-level tweak out there. Another solution could be re-enabling the orientation feature for mice, as well as trackballs. It's very frustrating that input device drivers include the functionality we desperately need for an accessibilty concern, but it's been disabled in the name of making the drivers more idiot-proof.

    Read the article

  • Media Archive System with branches?

    - by Ian McEwen
    In short, how can I get VCS features (revisioning, branching, and deduplication) for a media collection that's far too large for most/all VCS systems? Background I have a 300GB music folder; unfortunately, I only have the hard drive space for this on my desktop system. However, a good portion of my collection is FLAC; therefore, I could theoretically have a space-optimized version in which I transcode all the FLAC to mp3 or some other lossy format, and use only that version on the laptop. However, a portion of my collection isn't FLAC. And that which isn't FLAC shouldn't be transcoded to an equivalent format; it won't have any space savings, which is the point. Moreover, it shouldn't be duplicated: the mp3/ogg portions of the collection should probably be exactly the same files. Thoughts One solution is to have format-specific organization of my music folders, and use some script to transcode the FLAC directory to mp3 or such into another directory. Another is some sort of hack using entirely separate copies and symbolic links for deduplication, or something similar. But these also have a disadvantage of lacking versioning; I'd like to be able to reorganize my music collection, retag things, etc. and save history. This isn't key, but would be awfully nice. I can't see it as entirely unreasonable to set up VCS hooks or something equivalent to keep directory structure synced between two copies, update tags, and transcode FLAC automatically into the space-optimized copy. Basically, the system I really want is a version control system. Two branches: one archival/desktop branch including the FLAC, one space-optimized/laptop branch without it; most VCSes would deal well with whole chunks being the same files well by compressing in a reasonable way (i.e. don't keep two copies of the same data). I could also do a lot of what I talk about above with hooks. But I don't know of any VCS that would deal with a 300GB repository with almost 20k files. Many of them would just not even initialize the whole affair; others would just do it inexpressibly slowly or otherwise badly. checkpoint looks like it's designed for something close (it's at least for media), but wouldn't do deduplication well (and I'm not convinced I'd be able to script it to do things like automatic transcoding and directory-structure syncing). So. Is there anything out there that can do all this, or should I consider it a programming project?
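    For the scripted-transcode idea mentioned above, a rough sketch (paths and quality are placeholders; ffmpeg with LAME support assumed) that mirrors the archive into a space-optimized tree, transcoding FLAC and hard-linking everything else so non-FLAC files are never duplicated:

        #!/bin/bash
        SRC="$HOME/music-archive"    # full-quality branch
        DST="$HOME/music-laptop"     # space-optimized branch (same filesystem, so hard links work)
        find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
            rel="${f#$SRC/}"
            mkdir -p "$DST/$(dirname "$rel")"
            case "$f" in
                *.flac) out="$DST/${rel%.flac}.mp3"
                        [ -e "$out" ] || ffmpeg -loglevel error -i "$f" -qscale:a 2 "$out" ;;
                *)      ln -f "$f" "$DST/$rel" ;;   # dedupe by hard link instead of copying
            esac
        done

    This covers the transcode/dedup half only; the history and retagging half is still the versioning problem described above.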

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else. We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VOIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were connectable. Also any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial. I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet though, as our application relies on virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses but that's not vital. For simplicity, lets call the current server office and the second server I'm setting up, cloud. Call the server on the T1 phone. This proved to be rather complex because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing) and vice-versa. There's no rules for iptables that would be blocking the traffic either. Recently I came across this article on linuxjournal but the solution they provide seems to only cover the use of two servers and somewhat outdated (can't even find much documentation, their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc. So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect to. Thanks for reading and sorry for the long post, I hope it gets the point across :P
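    For the client-side half of this, stock OpenVPN already supports listing several servers and failing over between them; a minimal sketch of the relevant client directives (hostnames and port are placeholders):

        client
        dev tap
        remote office.example.com 1194
        remote cloud.example.com 1194
        remote phone.example.com 1194
        remote-random          # optional: spread clients across servers
        connect-retry 5
        resolv-retry 60

    This only decides which server a client reaches; it does not, by itself, let clients connected to different servers see each other on the shared subnet, which is the routing problem described above.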

    Read the article

  • Switch flooding when bonding interfaces in Linux

    - by John Philips
    [Topology diagram, roughly: Host A (eth0, AA:AA:AA:AA:AA:AA) connects through Switch 1 (layer 2/3) and Switch 2 down to Switch 3; Host B hangs off Switch 3 with four NICs: eth0 (B0:B0:B0:B0:B0:B0), eth1 (B1:B1:B1:B1:B1:B1), eth4 (B4:B4:B4:B4:B4:B4) and eth5 (B5:B5:B5:B5:B5:B5).]
    Topology overview: Host A has a single NIC. Host B has four NICs which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet.
    Traffic analysis: Host A is sending data to Host B using some SQL database application. Traffic from Host A to Host B: the source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5. Traffic from Host B to Host A: the source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA. Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 and Switch 2. Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one the frame came in on, of course).
    Reproduce: if you ping Host B from a workstation which is connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes.
    Question: it is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems fine, as that's what the Linux bonding algorithm is designed to do: balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, that MAC gets timed out of the bridge table, resulting in flooding. Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would? One solution is to set up a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
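    The cron workaround mentioned at the end is at least cheap to express; a sketch of a cron entry (host name is a placeholder) on any machine behind Switch 1 or 2 that refreshes the bridge tables before the default five-minute ageing timer expires:

        # /etc/cron.d/keep-hostb-mac-learned
        */4 * * * *  root  ping -c 1 -W 1 host-b.example.com >/dev/null 2>&1

    The same idea can be turned around: any periodic traffic sourced from the otherwise-idle eth5 keeps its MAC in the switches' tables, which is tidier than pinging from a third machine but requires the bonding slave to originate frames of its own.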

    Read the article

  • Local admin password recovery: Windows Vista

    - by Jim Dennis
    I am faced with an unsettling situation. A friend of my father's has rather suddenly become a widower. Naturally they've taken care of the bank accounts and all the normal mundane things that people have been doing for a century or so. However, she was the computer user of the household. He was aware that they had some online banking and bill-paying stuff, and that she spent lots of time on Facebook and the like. However, he doesn't know what her local passwords were (he was actually only vaguely aware that her couple of desktop and couple of laptop systems even had passwords). He's never heard of "admin" passwords, so that's no good either. In the past I've used KNOPPIX and the old LinuxCare "bootable business card" to recover NT passwords. I've never done this with Windows Vista, so I'm looking for the best advice on how to do it. Naturally I do have physical access to the systems (the two laptops are charging across the room from me, and her old desktop systems are, naturally, still back at his place). Getting it right is much more important than getting it done quickly or easily (I don't want to mess up those filesystems and possibly lose photos or other things that he or his kids or grandkids will want). (BTW: if anyone thinks this is some social engineering hack to play upon the sympathies of the community to get the information I'm asking for, think about it for a minute. I know about IRC and the "warez" boards. I know I can find this stuff out there if I dig enough. I'm just asking here because it'll hopefully be faster and, secondarily, to raise awareness. As more of us put more of our lives online, as we get older, and as places like Facebook continue to widen the appeal of computing to a broader segment of older people, we are, as computer nerds, going to see a lot more of this. Survivors will need us to be careful, sensitive and ethically responsible as they try to recover those bits of legacy during their bereavement. I can now tell you, first hand, it sucks!)
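    For the offline reset itself, the usual tool on a Linux live CD is chntpw; a hedged outline (device name, mount point, and account name are placeholders), with the caveat that an offline reset permanently locks out any EFS-encrypted files, so image the drives before touching anything:

        # from a live CD, after imaging the disk (dd, Clonezilla, etc.)
        sudo mkdir -p /mnt/win
        sudo mount /dev/sda2 /mnt/win                 # the Windows partition (ntfs-3g handles NTFS)
        cd /mnt/win/Windows/System32/config
        sudo chntpw -l SAM                            # list the local accounts
        sudo chntpw -u "HerAccount" SAM               # choose "1" to blank the password, then save
        cd / && sudo umount /mnt/win

    Blanking the password (rather than setting a new one) is the option chntpw itself recommends, and it is the least invasive change to the SAM hive.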

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration. 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) The Varnish Servers performance are awfully bad in general, to the point that if we shut down one of them the other two are unable to fullfill all the requests and start to skip beats resulting in pending requests, timeouts, 404, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k request per seconds during our max peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output: acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared - Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 20 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group varnish (113) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support off [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] idle_send_timeout 60 [seconds] listen_address :80 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] pcre_match_limit 10000 [] pcre_match_limit_recursion 10000 [] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 600 [seconds] sess_timeout 5 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 2000 [threads] thread_pool_min 5 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user varnish (106) vcc_err_unref on [bool] vcl_dir /etc/varnish vcl_trace off [bool] vmod_dir /usr/lib/varnish/vmods waiter default (epoll, poll) This is our default.vcl file: LINK sub vcl_recv { # BASIC recv COMMANDS: # # lookup -> search the item in the cache # pass -> always serve a fresh item (no-caching) # pipe -> like pass but ensures a direct-connection with the backend (no-cache AND no-proxy) # Allow the backend to serve up stale content if it is responding slow. 
# This defines when Varnish should use a stale object if it has one in the cache. set req.grace = 30s; if (client.ip == "127.0.0.1") { # request from NGINX - do not alter X-Forwarded-For set req.http.HTTPS = "on"; } else { # Add an X-Forwarded-For to keep track of original request unset req.http.HTTPS; unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } set req.backend = www_director; # Strip all cookies to force an anonymous request when the back-end servers are down. if (!req.backend.healthy) { unset req.http.Cookie; } ## HHTP Accept-Encoding if (req.http.Accept-Encoding) { if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* non-RFC2616 or CONNECT */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { return (pass); } if (req.http.HTTPS ~ "on") { return (pass); } ###################################################### # COOKIE HANDLING ###################################################### # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) { if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { return (pass); } } return (lookup); } # Code determining what to do when serving items from the IIS Server sub vcl_fetch { unset beresp.http.Server; set beresp.http.Server = "Server-1"; # Allow items to be stale if needed. This is the maximum time Varnish should keep an object. set beresp.grace = 1h; if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") { unset beresp.http.set-cookie; } # Default Varnish VCL logic if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has specific TB_NC no-caching cookie if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { set beresp.http.X-Cacheable = "NO:Got Cookie"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control private else if (beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") { set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)"; set beresp.ttl = 120 s; return(hit_for_pass); } # If we reach to this point, the object is cacheable. # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish # and can be checked using the following command: # varnishadm param.show default_ttl else if (beresp.ttl < 1s) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } # Cacheable and with valid TTL. 
else { set beresp.http.X-Cacheable = "YES"; } # DEBUG INFO (Cookies) # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie; return(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; if (obj.status == 404) { synthetic {" <!-- Markup for the 404 page goes here --> "}; } else if (obj.status == 500) { synthetic {" <!-- Markup for the 500 page goes here --> "}; } else if (obj.status == 503) { if (req.restarts < 4) { return(restart); } else { synthetic {" <!-- Markup for the 503 page goes here --> "}; } } else { synthetic {" <!-- Markup for a generic error page goes here --> "}; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } Thanks in advance,
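    With param.show reporting thread_pools 2 and thread_pool_min 5, one common first step is simply to give Varnish more worker threads up front and then watch the queue/drop counters under load; a hedged starting point (the numbers are guesses to tune against varnishstat, not a definitive fix):

        # pass the same -T/-S options used in DAEMON_OPTS if varnishadm needs them
        varnishadm param.set thread_pools 4
        varnishadm param.set thread_pool_min 200
        varnishadm param.set thread_pool_max 4000
        # counter names vary slightly by version; rising queued/dropped figures mean
        # the thread pools (or the IIS backends) are the bottleneck, not the cache
        varnishstat -1 | egrep 'n_wrk|sess_queued|sess_drop|backend_busy'

    Runtime changes made through varnishadm are lost on restart, so anything that helps should also be copied into the startup parameters.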

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere; I'm not sure until I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated.
    UPDATE: Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a denial-of-service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be moonlighting for some time. However, with SSH access still available, we ran a command to find all files edited or created in the window in which the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded-images folder of a ZenCart website. After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file-upload facility that was inadequately secured. After some googling, we found a security vulnerability in the ZenCart admin panel that allowed files to be uploaded as the picture for a record company (a section that was never really even used). Posting this form simply uploaded any file; it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability in ZenCart on the infected site and removed the offending files. The job was done, and I was home for 2 a.m.
    The moral:
    - Always apply security patches for ZenCart, or any other CMS for that matter. When security updates are released, the whole world is made aware of the vulnerability.
    - Always do backups, and back up your backups.
    - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault.
    Happy servering!
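    The "find everything touched in that window" step described above is worth keeping as a one-liner; a sketch with placeholder dates (GNU find, where -newermt needs findutils 4.3.3 or later):

        # files created or modified between the two dates, oldest first
        find /var/www -type f -newermt '2011-12-20' ! -newermt '2012-01-03' -printf '%T@ %p\n' | sort -n
        # portable variant for older find: compare against two reference timestamps
        touch -t 201112200000 /tmp/start; touch -t 201201030000 /tmp/end
        find /var/www -type f -newer /tmp/start ! -newer /tmp/end

    Run it against a read-only copy or snapshot where possible, since examining a compromised box in place can destroy timestamps you may later need.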

    Read the article

  • Secure copy uucp style

    - by Alexander Janssen
    I often have the case that I have to make a lot of hops to reach a remote host, just because there is no direct routing between my client and that host. When I need to copy files from a remote host two or more hops away, I always have to:
        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .
    Back when uucp was still being used, this would be as simple as uucp host1!host2!host3 /myfile . I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without the need to set up a lot of tunnels or deploy new software to the remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts and does the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding.
    Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution for copying files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something that already does it. Minimally invasive changes to the hosts on the bangpath, simple invocation from the client.
    Edit 2: To give you an impression of how this is properly done for interactive sessions, have a look at the GXPC clustershell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your SSH key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA", and it just works. I want to have this for scp.
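    Since everything here can live on the client, one well-known way to get the bangpath effect without touching the intermediate hosts is ProxyCommand chaining in ~/.ssh/config (host names as in the example above; -W needs OpenSSH 5.4 or newer on the client, older versions use nc):

        # ~/.ssh/config on the client
        Host host2
            ProxyCommand ssh host1 -W %h:%p
        Host host3
            ProxyCommand ssh host2 -W %h:%p
        # then a single, hop-transparent copy:
        #   scp host3:/myfile .
        # pre-5.4 form: ProxyCommand ssh host1 nc %h %p

    Authentication happens end-to-end from the client's own keys or agent, and nothing needs to be installed or reconfigured on host1, host2, or host3.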

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it. So, I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. * sigh * The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call "VMNS" from here on) in the LAN Connection properties box, it starts working. But than the Virtual PC instances lose their network connectivity. If I re-enable VMNS again in the same instance, everything works (Internet on host and in the virtualized instances). But after the next reboot/sleep cycle this starts over. The route table gave me a clue though. When doing a cycle w/ VMNS enabled: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 On-link 10.0.3.51 20 0.0.0.0 0.0.0.0 10.0.10.10 10.0.3.51 276 ... After VMNS is disabled, the first route goes away. I assume that is for VMNS to intercept virtualized instance's network connections and forward them correctly? Just a guess though. More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense and if turned on changed anything. So it might be something there I'm missing, but I don't know what. My current hacked solution: So, I figured I'd mess with the routes myself to see if that helped, it did. If I run a route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10--the one that points to my actual gateway (10.0.10.10)--then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster then bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.) Any suggestions on the right way to fix this?
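    One way to avoid running the two commands by hand is to wrap the existing workaround in a small batch file and register it as an elevated logon task; a sketch using the addresses from the question (still a stopgap, not a fix for whatever Virtual Machine Network Services is doing at boot):

        @echo off
        rem fixroute.cmd - drop the on-link 0.0.0.0 route VMNS re-adds, keep the real gateway
        route delete 0.0.0.0
        route add 0.0.0.0 mask 0.0.0.0 10.0.10.10 metric 276

        rem register it once from an elevated prompt:
        rem   schtasks /create /tn "FixVMNSRoute" /tr "C:\scripts\fixroute.cmd" /sc onlogon /rl highest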

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas. Add Drobox to Your Start Menu Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my start menu along side the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your start menu. The hack isn’t super elegant but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link. Understanding and Using Symbolic Links Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who? Dear Symbolic, Symbolic links aren’t commonly used by many Windows users which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts—the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubunutu to get a clearer idea what you’re getting into. Rip Your TV DVDs into Handy Episode Files Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs Dear DVDs, Oh man is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes. You’ll need a copy of DVDFab to strip away the protections on the discs and rip the disc and Handbrake to load the disc image and convert the files. 
It’s not quite as smooth as the CD-to-iTunes workflow but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.
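    For readers who want the actual syntax behind the symbolic-links answer above, the built-in commands are mklink on Windows (from an elevated Command Prompt) and ln -s on Ubuntu; the paths below are only examples:

        rem Windows: a directory symlink, then a file symlink
        mklink /D "C:\Users\You\Dropbox\Music" "D:\Media\Music"
        mklink "C:\Users\You\Desktop\notes.txt" "D:\Docs\notes.txt"
        # Ubuntu equivalent
        ln -s /mnt/media/music ~/Dropbox/Music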

    Read the article

  • Using jQuery, CKEditor, AJAX in ASP.NET MVC 2

    - by Ray Linder
    After banging my head for days on a “A potentially dangerous Request.Form value was detected" issue when post (ajax-ing) a form in ASP.NET MVC 2 on .NET 4.0 framework using jQuery and CKEditor, I found that when you use the following: Code Snippet $.ajax({     url: '/TheArea/Root/Add',     type: 'POST',     data: $("#form0Add").serialize(),     dataType: 'json',     //contentType: 'application/json; charset=utf-8',     beforeSend: function ()     {         pageNotify("NotifyMsgContentDiv", "MsgDefaultDiv", '<img src="/Content/images/content/icons/busy.gif" /> Adding post, please wait...', 300, "", true);         $("#btnAddSubmit").val("Please wait...").addClass("button-disabled").attr("disabled", "disabled");     },     success: function (data)     {         $("#btnAddSubmit").val("Add New Post").removeClass("button-disabled").removeAttr('disabled');         redirectToUrl("/Exhibitions");     },     error: function ()     {         pageNotify("NotifyMsgContentDiv", "MsgErrorDiv", '<img src="/Content/images/content/icons/cross.png" /> Could not add post. Please try again or contact your web administrator.', 6000, "normal");         $("#btnAddSubmit").val("Add New Post").removeClass("button-disabled").removeAttr('disabled');     } }); Notice the following: Code Snippet data: $("#form0Add").serialize(), You may run into the “A potentially dangerous Request.Form value was detected" issue with this. One of the requirements was NOT to disable ValidateRequest (ValidateRequest=”false”). For this project (and any other project) I felt it wasn’t necessary to disable ValidateRequest. Note: I’ve search for alternatives for the posting issue and everyone and their mothers continually suggested to disable ValidateRequest. That bothers me – a LOT. So, disabling ValidateRequest is totally out of the question (and always will be).  So I thought to modify how the “data: “ gets serialized. the ajax data fix was simple, add a .html(). YES!!! IT WORKS!!! No more “potentially dangerous” issue, ajax form posts (and does it beautifully)! So if you’re using jQuery to $.ajax() a form with CKEditor, remember to do: Code Snippet data: $("#form0Add").serialize().html(), or bad things will happen. Also, don’t forget to set Code Snippet config.htmlEncodeOutput = true; for the CKEditor config.js file (or equivalent). Example: Code Snippet CKEDITOR.editorConfig = function( config ) {     // Define changes to default configuration here. For example:     // config.language = 'fr';     config.uiColor = '#ccddff';     config.width = 640;     config.ignoreEmptyParagraph = true;     config.resize_enabled = false;     config.skin = 'kama';     config.enterMode = CKEDITOR.ENTER_BR;       config.toolbar = 'MyToolbar';     config.toolbar_MyToolbar =     [         ['Bold', 'Italic', 'Underline'],         ['JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock', 'Font', 'FontSize', 'TextColor', 'BGColor'],         ['BulletedList', 'NumberedList', '-', 'Outdent', 'Indent'],         '/',         ['Scayt', '-', 'Cut', 'Copy', 'Paste', 'Find'],         ['Undo', 'Redo'],         ['Link', 'Unlink', 'Anchor', 'Image', 'Flash', 'HorizontalRule'],         ['Table'],         ['Preview', 'Source']     ];     config.htmlEncodeOutput = true; }; Happy coding!!! Tags: jQuery ASP.NET MVC 2 ASP.NET 4.0 AJAX

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage of the unauthorized release of 251,287 diplomatic documents and of previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really just an issue for governments, but they are not: they are an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of its stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the moment of this blog anyway, we don't know yet if that is true or how they got the information. But how did the diplomatic cable leak happen?

    For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information... he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain."

    Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc., etc. And then think about that last quote above from what was a very junior-level person in the organization... still feeling comfortable with your ability to control all your information?

    So what can you do to guard against these types of breaches, where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help.

    Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.

    Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see Stuxnet for why that is important...). In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive. And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.

    But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information. And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example.

    Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information.

    Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use. But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • Comix is an Awesome Comics Archive Viewer for Linux

    - by Asian Angel
    Do you have a terrific collection of comics in electronic form but need a great app to view them with? If you have a Linux system then we have the perfect app for you…Comix, the open source comic reading powerhouse.

    For our example we installed Comix on our Ubuntu 10.10 system. Just go to the Ubuntu Software Center and conduct a quick search. When you go to install Comix in the Ubuntu Software Center, make sure to scroll all the way to the bottom and select Unarchiver for .rar files. The listing appears as a “non-free version” for some reason, but displays as free once selected. Odd, but nothing to worry about in the end…

    Once Comix is installed you can find it in the Graphics Section of the Ubuntu Menu. Comix also comes with a nice set of options to let you customize the app to best suit those important comic reading needs.

    Here is a comprehensive list of the features this little comic reading powerhouse packs into one easy to use package: fullscreen mode, double page mode, fit-to-screen mode, zooming and scrolling, rotation and mirroring, magnification lens, changeable image scaling quality, image enhancement, can read right-to-left to fit manga, etc., caching for faster page flipping, bookmarks support, customizable GUI, archive comments support, archive converter, thumbnail browser, standards compliant, available in multiple languages (English, Swedish, Simplified Chinese, Spanish, Brazilian Portuguese, & German), reads “JPEG, PNG, TIFF, GIF, BMP, ICO, XPM, & XBM” image formats, reads “ZIP & tar archives natively, RAR archives through the unrar program”, runs on Linux, FreeBSD, NetBSD, and virtually any other UNIX-like OS, and more!

    Have fun reading those comics on your favorite Linux system! Interested in learning more about Comix? Then be certain to drop by the homepage! Comix Homepage

    Read the article

  • Creating a bare-bones web browser: after the HTML parser, JavaScript parser, etc. have done their work, how do I display the content of the webpage?

    - by aste123
    This is a personal project to learn computer programming. I took a look at this: https://www.udacity.com/course/viewer#!/c-cs262 The approach taken there is: an Abstract Syntax Tree is created, but the JavaScript is not completely broken down, so that it doesn't get confused with the HTML tags. Then the JavaScript interpreter is called on it; the interpreter stores the text from write() and document.write() to be used later. Then a graphics library in Python is called which converts everything to a PDF file, which is then converted into PNG or JPEG and displayed.

    My question: I want to display the actual text in a window (which I will design later), like Firefox or Chrome do, instead of image files, so that the data can be selected, copied, etc. by the user of the browser. How do I accomplish this? In other words, what are the other elements of a bare-bones web browser that I am missing? I would prefer to implement most of the stuff in C++, although if things seem too complicated I might go with Python to save time, create a prototype, and later create another bare-bones browser in C++ and add more features. This is a project to learn more; I do realize we already have lots of reliable browsers like Firefox, etc.

    The way I feel it is done: I think after all the broken-down contents have been created by the parsers and interpreters, I will need to access them individually from within the window's code (like Qt) and then decide upon a good way to display them. I am not sure if that is the way this should be done.

    Additions after a useful comment by Kilian Foth: I found this page: http://friendlybit.com/css/rendering-a-web-page-step-by-step/

    14. A DOM tree is built out of the broken HTML
    15. New requests are made to the server for each new resource that is found in the HTML source (typically images, style sheets, and JavaScript files). Go back to step 3 and repeat for each resource.
    16. Stylesheets are parsed, and the rendering information in each gets attached to the matching node in the DOM tree
    17. Javascript is parsed and executed, and DOM nodes are moved and style information is updated accordingly
    18. The browser renders the page on the screen according to the DOM tree and the style information for each node
    19. You see the page on the screen

    I need help with step 18. How do I do that? How much work do WebKit and Gecko do? I want to use a ready-made layout renderer for step 18 and not for anything that comes before it.
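    For what it's worth, "step 18" in a toy prototype can be as small as walking the finished DOM tree and inserting real characters into a text widget, instead of rasterising the page to a PDF/PNG. Below is a minimal, purely illustrative Python/Tkinter sketch; the tuple-based toy DOM and the tag names are assumptions made up for the demo, not part of WebKit, Gecko, or the Udacity course code.

        # Toy "paint" step: walk an already-built DOM-like tree and put real,
        # selectable characters into a window (no layout engine, no CSS).
        import tkinter as tk

        # assumed node shape for this demo: (tag, children), where children
        # are either nested nodes or plain strings
        toy_dom = ("body", [
            ("h1", ["Hello, bare-bones browser"]),
            ("p", ["This is real, selectable text rather than a rendered image, "
                   "so the user can copy it, just like in Firefox or Chrome."]),
        ])

        def paint(widget, node):
            tag, children = node
            for child in children:
                if isinstance(child, str):
                    # insert characters tagged with the element name so the
                    # font configured below gets applied
                    widget.insert("end", child, tag)
                else:
                    paint(widget, child)
                    widget.insert("end", "\n")

        root = tk.Tk()
        root.title("toy renderer")
        text = tk.Text(root, wrap="word", width=60, height=10)
        text.pack(fill="both", expand=True)
        text.tag_configure("h1", font=("Helvetica", 18, "bold"), spacing3=10)
        text.tag_configure("p", font=("Helvetica", 11))
        paint(text, toy_dom)
        text.configure(state="disabled")  # read-only, but still selectable/copyable
        root.mainloop()

    A real engine such as WebKit or Gecko additionally handles layout, styling, and incremental reflow, which is why reusing one of them for step 18 is a reasonable plan.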

    Read the article

  • Slide Menu with jQuery & ASP.NET

    - by Jason Ulloa
    In this post we will work on something that at times turns into a real headache: creating menus in our web applications. Our goal is to avoid elements that can make the page a bit slow, so we will use jQuery, a tool very much in the spirit of AJAX, to build our menu. To build our example menu we will need three things: 1. CSS, to apply the styles. 2. jQuery, to handle the animations. 3. Images to assemble the menus.

    Our first step is to add the references to our page, to include the CSS and the scripts:

        <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.defaults.css" />
        <link rel="stylesheet" type="text/css" href="Styles/jquery.hrzAccordion.examples.css" />
        <script type="text/javascript" src="JS/jquery-1.3.2.js"></script>
        <script type="text/javascript" src="JS/jquery.easing.1.3.js"></script>
        <script type="text/javascript" src="JS/jquery.hrzAccordion.js"></script>
        <script type="text/javascript" src="JS/jquery.hrzAccordion.examples.js"></script>

    Our second step is to define the HTML that will hold the image and text elements:

        <li>
            <div class="handle">
                <img src="images/title1.png" /></div>
            <img src="images/image_test.gif" align="left" />
            <h3>
                Contenido 1</h3>
            <p>
                Contenido de Ejemplo 1.<br>
                <br>
                Agregue todo el contenido aquí</p>
        </li>

    In the code above we defined an element that holds an image shown inside the menu once it is expanded, an HTML H3 tag that holds the title, and a <p> element for the paragraph of text. As you can see, it is really simple. If we want to add more items, we only have to copy the previous div and add the new content. In the end our menu should look something like this: Finally, here is the example to download.

    Read the article

  • Prevent Changing the Screen Saver and Wallpaper in Windows 7

    - by Mysticgeek
    Sometimes you might not want users to have the ability to change Screen Savers and Wallpaper on Windows 7 workstations. Today we look at how to prevent them from changing either one or both. You might administer computers in your home or small office and find it annoying when users continuously change the wallpaper and Screen Savers to something obnoxious. A lot of times they might be inexperienced users who download these so-called “wonderful and free” Screen Saver/Wallpaper packages from shady sites that include loads of Spyware. Preventing users from changing them is another helpful way to avoid wasting time switching things back.

    Prevent Changing Screensavers & Wallpaper Using Group Policy Editor. Note: This method uses Group Policy, which is not available in Home versions of Windows 7. Open the Start Menu, enter gpedit.msc into the Search box, and hit Enter. When Local Group Policy Editor opens, navigate to User Configuration \ Administrative Templates \ Control Panel \ Personalization. Then in the right column double-click on Prevent changing desktop background. Now check the radio button next to Enabled, then click OK. Back on the Group Policy screen, double-click on Prevent changing screen saver. In the next screen select the radio button next to Enabled, click OK, then close out of Group Policy Editor. Now when a user goes into the Personalization section, the Desktop Background hyperlink is grayed out and inactive. Notice the message "One or more of the settings on this page has been disabled by the system administrator" at the bottom of the section. If they click to change the Screen Saver, an error message will pop up letting them know the function is disabled.

    Prevent Changing Screensavers & Wallpaper Using a Registry Hack. You can also make a couple of Registry changes to prevent users from changing the Wallpaper & Screen Saver, which will work on Home versions of Windows 7. Before making any Registry changes, make sure you back the Registry up first. Open the Registry by typing regedit into the Search box in the Start menu and hitting Enter. First we’ll start with the Wallpaper. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System, create a new String Value, and name it Wallpaper. Then modify the Value data to point to the location of the Wallpaper you want it to always be; in this example it’s our main wallpaper on our local drive…then click OK. Now let’s make sure they can’t change the Screen Saver. In the same Registry location, we need to make a new DWORD (32-bit) Value. Give it the Value name NoDispScrSavPage and the value data of “1” and click OK. Close out of the Registry and restart the machine, or simply log off then back on again, for the changes to take effect.

    Results: For the Wallpapers, a user can still go in and see the selections; however, if they try to change it to something else, it will just go back to the Personalization screen and no changes will be made, as we set the value to only be the background we specified. If the user tries to make a change to the Screen Saver, the hyperlink will be grayed out and inactive, and the message "One or more of the settings on this page has been disabled by the system administrator" will be displayed at the bottom of the section.

    Conclusion: If you’re tired of users changing the Wallpaper and Screen Saver, and want another way to help avoid Malware, locking down these settings can help a lot. Again, before making any changes to the Registry, make sure to back it up.
    These settings should work in Vista and XP as well.
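    If you would rather script the registry half of this than click through regedit, the same two values can be set with a few lines of Python using the standard winreg module. This is only a hedged sketch: the key path and the value names (Wallpaper, NoDispScrSavPage) come from the article above, the wallpaper path is a made-up example, and as always you should back up the Registry before running anything like it.

        # Minimal sketch (Windows only): create the same two values the article
        # sets by hand in regedit, for the current user.
        import winreg

        KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"
        WALLPAPER = r"C:\Users\Public\Pictures\locked-wallpaper.jpg"  # example path

        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            # pin the desktop background to one specific image file
            winreg.SetValueEx(key, "Wallpaper", 0, winreg.REG_SZ, WALLPAPER)
            # hide/disable the Screen Saver settings page
            winreg.SetValueEx(key, "NoDispScrSavPage", 0, winreg.REG_DWORD, 1)

        print("Done - log off and back on for the change to take effect.")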

    Read the article

  • Create Custom Sized Thumbnail Images with Simple Image Resizer [Cross-Platform]

    - by Asian Angel
    Are you looking for an easy way to create custom sized thumbnail images for use in blog posts, photo albums, and more? Whether it is a single image or a CD full, Simple Image Resizer is the right app to get the job done for you.

    To add the new PPA for Simple Image Resizer, open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Rafael Sachetto on the left (highlighted with red in the image). The listing for Simple Image Resizer will be right at the top…click Install to add the program to your system. After the installation is complete you can find Simple Image Resizer listed as Sir in the Graphics sub-menu.

    When you open Simple Image Resizer you will need to browse for the directory containing the images you want to work with, select a destination folder, choose a target format and prefix, enter the desired pixel size for converted images, and set the quality level. Convert your image(s) when ready… Note: You will need to determine the image size that best suits your needs beforehand. For our example we chose to convert a single image. A quick check shows our new “thumbnailed” image looking very nice.

    Simple Image Resizer can convert “into and from” the following image formats: .jpeg, .png, .bmp, .gif, .xpm, .pgm, .pbm, and .ppm

    Command Line Installation. Note: For older Ubuntu systems (9.04 and previous) see the link provided below.

        sudo add-apt-repository ppa:rsachetto/ppa
        sudo apt-get update && sudo apt-get install sir

    Links (Note: Simple Image Resizer is available for Ubuntu, Slackware Linux, and Windows): Simple Image Resizer PPA at Launchpad, Simple Image Resizer Homepage, Command Line Installation for Older Ubuntu Systems

    Bonus: The anime wallpaper shown in the screenshots above can be found here: The end where it begins [DesktopNexus]
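    If you prefer to batch-generate thumbnails from a script rather than a GUI, a rough equivalent of what Simple Image Resizer does can be put together with the Pillow imaging library. This sketch is purely illustrative and is unrelated to the Sir codebase; the folder names, prefix, and target size are assumptions to adjust for your own setup.

        # Rough scripted stand-in for a batch thumbnail run (pip install Pillow).
        from pathlib import Path
        from PIL import Image

        SRC = Path("originals")   # folder with the full-size images
        DEST = Path("thumbs")     # destination folder for the thumbnails
        SIZE = (200, 200)         # bounding box in pixels for each thumbnail
        PREFIX = "thumb_"         # prefix added to each converted file

        DEST.mkdir(exist_ok=True)
        for path in SRC.iterdir():
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".bmp", ".gif"}:
                continue
            with Image.open(path) as im:
                im.thumbnail(SIZE)  # resize in place, keeping the aspect ratio
                out = DEST / f"{PREFIX}{path.stem}.png"
                im.save(out)
                print(f"{path.name} -> {out.name}")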

    Read the article

  • Share Your Top 30 Visited Domains with Visitation Cloud for Firefox

    - by Asian Angel
    Curious about the domains that you visit most, or perhaps you want a way to share that information on a social website? Now you can see and share the 30 most visited domains in your browser’s history with the Visitation Cloud extension.

    Accessing Visitation Cloud: As soon as you install the extension you can get started using it. Depending on how your browser’s UI is set up there are three methods for accessing Visitation Cloud: a “Visitation Cloud Button” inserted at the end of your “Bookmarks Toolbar”, a menu listing in the “Tools Menu”, and a “Toolbar Button” (not shown here).

    Visitation Cloud in Action: As soon as you activate Visitation Cloud a new window will appear with your top domains displayed in a cloud format. Keep in mind that this is more than just a static image…each listing is actually a clickable link. Clicking on any of the listings will open that domain in a new tab or window depending on your particular browser settings. If you feel that you have a great set of links and want to share it with your friends then that is easy to do. Right click anywhere within the Visitation Cloud Window and select “Save as…”. The “cloud image” can be saved in “.png, .jpg, or Scalable Vector Graphics (.svg)” format. For our example we chose the “.svg format”. Perhaps you love the set of links but not the layout…right click and select “Randomize” to change how the cloud looks. Here is our cloud after being “Randomized”. Things definitely got moved around…

    Accessing the Visitation Cloud Image in other Browsers: Once you have your “cloud image” saved you can share it with friends or save it for your own future use in other browsers. Here is our “cloud image” open in Opera Browser with link opening in progress. The same “cloud image” open in Google Chrome. Very nice…

    Conclusion: While this may not be something that everyone will use, Visitation Cloud does make for a rather unique, interesting, & fun way to access and share your most visited domains.

    Links: Download the Visitation Cloud extension (Mozilla Add-ons)

    Read the article

  • How to share two keyboards on the same laptop, a French ISO layout and a USA ANSI layout keyboard, over USB?

    - by reyman64
    I recently bought a "Noppoo Choc Mini", which has a specific ANSI US-International PC84 layout. This keyboard has only 84 keys, a 60% (compact tenkeyless) reduced layout. My problem is simple: there is no keyboard layout in Ubuntu 12.04 which corresponds to this normal USA ANSI layout, and it's the same problem for the reduced 84-key version. I am searching for a template of the normal ANSI US-International layout for xmodmap/xkb, so that afterwards I can try to manually map the remaining keys. I searched on Google and did not find any other user with the same problem, so it seems I don't have the right keywords to search for this information.

    Edit 1: Here you can see there is probably a bug in Ubuntu, because the layout for USA with dead keys is not correct! I have this: http://minus.com/lEdKMrsNAwkVA and other users have this for the same layout: http://i.stack.imgur.com/p52XG.png

    Edit 2: After a "sudo dpkg-reconfigure keyboard-configuration" (French standard keyboard pc105 + Precision M65 keyboard from the Dell laptop), I can now see the correct US layout in the settings, but I cannot get the ISO layout for French usage...

    Edit 3: OK, after a reboot I understand the problem, so let me explain. I have one laptop with an integrated French keyboard, and I want to use my USB keyboard, which uses a USA ANSI layout. It seems it's impossible in Ubuntu, with "dpkg-reconfigure keyboard-configuration", to share two different physical layouts (ANSI and EU ISO) on the same computer...

    Edit 4: OK, it seems I can switch the physical layout (ISO <-> ANSI) with these commands in a terminal:

        setxkbmap -layout us
        setxkbmap -layout us -variant alt-intl
        setxkbmap -layout fr

    It's very complicated, and it seems Ubuntu 12.04 has a big problem with its keyboard manager... because everything works great with these commands, without ANY change in the system keyboard settings!!! A second bug? The image of the layout for fr is buggy: the layout shown is not ISO, yet I can press the "<" key at the left of the right Shift without any problem! You can see the image here (a French alternative with an ANSI layout? it's crazy?): http://minus.com/lXsDJwoeyWAfF

    Can you help me on this point? I'm lost with xkb, and manual mapping is very complicated... Thanks a lot, SR

    Read the article

  • BoundingBox created from mesh to origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create Bounding Boxes (for collision purposes) for each of the meshes, they are calculated from the origin to the far end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya and that it's interpreted wrongly... or something. Here's the code for when I create the boxes:

        private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh)
        {
            Matrix[] boneTransforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);
            BoundingBox result = new BoundingBox();
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
                if (meshPartBoundingBox != null)
                    result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value);
            }
            result = new BoundingBox(result.Min, result.Max);
            return result;
        }

        private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform)
        {
            if (meshPart.VertexBuffer == null)
                return null;
            Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position);
            if (positions == null)
                return null;
            Vector3[] transformedPositions = new Vector3[positions.Length];
            Vector3.Transform(positions, ref transform, transformedPositions);
            for (int i = 0; i < transformedPositions.Length; i++)
            {
                Console.WriteLine(" " + transformedPositions[i]);
            }
            return BoundingBox.CreateFromPoints(transformedPositions);
        }

        public static class VertexElementExtractor
        {
            public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage)
            {
                VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration;
                VertexElement[] elements = vd.GetVertexElements();
                Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3;
                if (!elements.Any(elementPredicate))
                    return null;
                VertexElement element = elements.First(elementPredicate);
                Vector3[] vertexData = new Vector3[meshPart.NumVertices];
                meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride);
                return vertexData;
            }
        }

    Here's a link to a picture of the mesh (the model holds six meshes, but I'm only rendering one and its bounding box to make it clearer): http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm referring to is the cube-like one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and triple-) checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >