Search Results

Search found 13180 results on 528 pages for 'non interactive'.


  • Adding local users / passwords on Kerberized Linux box

    - by Brian
    Right now if I try to add a non-system user not in the university's Kerberos realm I am prompted for a Kerberos password anyway. Obviously there is no password to be entered, so I just press enter and see: passwd: Authentication token manipulation error passwd: password unchanged Typing passwd newuser has the same issue with the same message. I tried using pwconv in the hopes that only a shadow entry was needed, but it changed nothing. I want to be able to add a local user not in the realm and give them a local password without being bothered about Kerberos. I am on Ubuntu 10.04. Here are my /etc/pam.d/common-* files (the defaults that Ubuntu's pam-auth-update package generates): account # here are the per-package modules (the "Primary" block) account [success=1 new_authtok_reqd=done default=ignore] pam_unix.so # here's the fallback if no module succeeds account requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around account required pam_permit.so # and here are more per-package modules (the "Additional" block) account required pam_krb5.so minimum_uid=1000 # end of pam-auth-update config auth # here are the per-package modules (the "Primary" block) auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass # here's the fallback if no module succeeds auth requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around auth required pam_permit.so # and here are more per-package modules (the "Additional" block) # end of pam-auth-update config password # here are the per-package modules (the "Primary" block) password requisite pam_krb5.so minimum_uid=1000 password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 # here's the fallback if no module succeeds password requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around password required pam_permit.so # and here are more per-package modules (the "Additional" block) # end of pam-auth-update config session # here are the per-package modules (the "Primary" block) session [default=1] pam_permit.so # here's the fallback if no module succeeds session requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around session required pam_permit.so # and here are more per-package modules (the "Additional" block) session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so # end of pam-auth-update config
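
    A hedged sketch for the password stack (an assumption to adapt, not a verified fix): in the common-password file above, pam_krb5 is the first requisite module, so a failed Kerberos password change aborts the stack before pam_unix ever runs. Letting pam_krb5 fail softly and fall through to pam_unix for accounts with no principal would look roughly like the lines below; note this also changes behaviour for Kerberos accounts (their local hash is no longer updated), so treat it as a starting point only.

        # /etc/pam.d/common-password -- hedged sketch, not a drop-in replacement
        # Kerberos users: pam_krb5 succeeds and the stack jumps straight to pam_permit.
        # Local-only users: pam_krb5 is ignored and pam_unix sets the local password.
        password  [success=2 default=ignore]  pam_krb5.so minimum_uid=1000
        password  [success=1 default=ignore]  pam_unix.so obscure try_first_pass sha512
        password  requisite                   pam_deny.so
        password  required                    pam_permit.so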

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
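
    The likeliest explanation (hedged, since the full httpd.conf is not shown) is not that the .htaccess is ignored at all -- the BLAH BLAH test proves it is read -- but that the pattern never matches in per-directory context: inside .htaccess, mod_rewrite strips the leading slash from the path it tests, so ^/(.*) cannot match, while the very same rule does match once it is moved into server context. A quick check under that assumption:

        # Hedged sketch: drop the leading slash in the per-directory rule
        # /var/www/html/.htaccess
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]

        # and confirm AllowOverride All sits in the block that actually governs the docroot:
        #   grep -n -A 8 '<Directory "/var/www/html">' /etc/httpd/conf/httpd.conf
        # then: sudo apachectl restart && curl -I -H 'Host: domain.com' http://localhost/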

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (Call the remote machine "destination.host".): $ cd /path/to/data $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar" This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a Write failed: broken pipe error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access the any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual with the correct IP address and everything. I tried restarting the networking service with $ /etc/init.d/networking restart but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar -- ssh pipeline above with a straight scp: $ scp -r datafolder/ [email protected]:~ but to no avail Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with # echo 12582911 > /proc/sys/net/core/wmem_max but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
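
    One workaround sometimes suggested for the synthetic hv_netvsc adapter of that era (an assumption here, not something verified on this exact kernel) is to disable the offloads on the virtual NIC and throttle the transfer so the host-side virtual switch is never flooded:

        # Hedged sketch -- adjust eth0 to the synthetic adapter's actual name
        sudo ethtool -K eth0 tso off gso off gro off
        # Throttle the copy (rate in KB/s is arbitrary) instead of the raw tar | ssh pipe:
        rsync -av --bwlimit=20000 datafolder/ [email protected]:~/datafolder/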

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing table index corruption almost daily on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know what to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so network problems can't cause it, and OpLocks also can't be a reason. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the earlier change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are special settings in Windows Server for DBF tables that are written simultaneously by several terminal users? Is there a way to turn off caching for certain files to check this? Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during the regular synchronizations, only when an order is placed by a website visitor. Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption
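
    If any of the DBF/CDX files are still reachable over an SMB share at all (hedged: with every user on the terminal server this may simply not apply), one reversible experiment is to disable opportunistic locking on the file server and watch whether the corruption frequency changes, which turns the "OpLocks can't be a reason" assumption into a tested fact. Back up the registry key first; the values below are standard for Windows Server 2003 but are offered as a test, not a fix:

        :: Hedged sketch (run on the file server, outside working hours)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
            /v EnableOplocks /t REG_DWORD /d 0 /f
        net stop server /y
        net start server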

    Read the article

  • Cannot get sound over HDMI in Kubuntu 9.10

    - by user32509
    I have used a hdmi cable to connect my lcd (which is connected with my speakers) with my nvida 275 gtx grafic card. I can not get the sound output to work. The hardware itself is working probably - I tested it under windows. Currently I am running Kubuntu 9.10 64 with Nvidia 190.53. The sound output worked fine before I installed the hdmi connection. (German output - i can change it, if you tell me how :)) aplay -l **** Liste von PLAYBACK Geräten **** Karte 0: Intel [HDA Intel], Gerät 0: ALC889A Analog [ALC889A Analog] Untergeordnete Geräte: 1/1 Untergeordnetes Gerät '0: subdevice #0 Karte 0: Intel [HDA Intel], Gerät 1: ALC889A Digital [ALC889A Digital] Untergeordnete Geräte: 1/1 Untergeordnetes Gerät '0: subdevice #0 aplay -L front:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog Front speakers surround40:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog 4.0 Surround output to Front and Rear speakers surround41:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=Intel,DEV=0 HDA Intel, ALC889A Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=Intel,DEV=0 HDA Intel, ALC889A Digital IEC958 (S/PDIF) Digital Audio Output null Discard all samples (playback) or generate zero samples (capture) pulse Playback/recording through the PulseAudio sound server And i disabled mute in kmix an all channels :) Edit: lspci -v ... 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) Subsystem: Giga-byte Technology Device a022 Flags: bus master, fast devsel, latency 0, IRQ 22 Memory at ea400000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel <?> Capabilities: [130] Root Complex Link <?> Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel ... cat /proc/asound/version Advanced Linux Sound Architecture Driver Version 1.0.20. lsmod | grep snd_hda_intel snd_hda_intel 31880 2 snd_hda_codec 87584 2 snd_hda_codec_realtek,snd_hda_intel snd_pcm 93160 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss snd 77096 16 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device snd_page_alloc 10928 2 snd_hda_intel,snd_pcm I think I am missing the something-hdmi module? Is there such a thing?
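
    Going only by the aplay output quoted above (so treat the device names as assumptions): the list shows just the on-board Intel/ALC889A codec and no separate NVIDIA HDMI device, which suggests the GTX 275 is not exposing an audio function of its own -- cards of that generation usually needed an S/PDIF pass-through cable from the motherboard header to carry sound over HDMI. The digital path that does appear in the list can be tested directly:

        # Hedged sketch: exercise the IEC958/S-PDIF device that aplay -L already reports
        speaker-test -D iec958:CARD=Intel,DEV=0 -c 2 -t wav
        # or
        aplay -D iec958:CARD=Intel,DEV=0 /usr/share/sounds/alsa/Front_Center.wav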

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point of sale program that is used, for some reason, answers requests for this type of information in the form of PDF files. The issue: the PDF files look to be in a format that should easily copy and paste into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (NOTE: There are no duplicated Dates in the Date column, i.e., multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible, in about four or five steps, to get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when copy/pasted into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third being the Hours of the transactions, and the final third being the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of duplicated Dates in the Date column as mentioned above. My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely close to usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some format other than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet so that it will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit of it to let somebody actually see what needs to be dealt with - just let me know. The PDFs are less than 100 KB each, so sending them shouldn't be a burden to any interested party.
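
    Before asking for a different export format, a layout-preserving text extraction is worth a try: pdftotext from poppler-utils has a -layout flag that keeps the columns aligned, which usually pastes into a spreadsheet far better than copying from a viewer. File names below are placeholders:

        # Hedged sketch: extract the PDF while keeping its visual column layout
        pdftotext -layout sales_report.pdf sales_report.txt
        # then import sales_report.txt as fixed-width / whitespace-delimited text,
        # one spreadsheet column per report column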

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500.000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean-t -p/htcache -l15G"), IOwait is going through the roof for several hours. Without any visible action. Only after hours, htcacheclean starts to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache. Only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read the a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In this 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect).
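
    For what it's worth, htcacheclean can also run as its own daemon so it prunes continuously instead of in one enormous pass; whether that keeps up with 16 GB/day on this iSCSI volume is an open question, but the invocation itself is standard (interval and limit below are only examples):

        # Hedged sketch: daemon mode, waking every 30 minutes, being nice to other I/O,
        # skipping runs when the cache has not changed, and removing empty directories
        htcacheclean -d30 -n -i -t -p/htcache -l15G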

    Read the article

  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

    root@foo:/etc/bind# dig @1.2.3.4 foo.example.com
    ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;;foo.example.com. IN A
    ;; Query time: 0 msec
    ;; SERVER: 1.2.3.4#53(1.2.3.4)
    ;; WHEN: Thu Apr 1 09:57:59 2010
    ;; MSG SIZE rcvd: 31

    Some background on the fictitious "1.2.3.4". It's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point - DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" was serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it's non-responsive. After investigation I saw the message you see above, SERVFAIL. I looked into the zone file and saw the following: example.com IN SOA ns1.example.com. hostmaster.mail.example.com. ( I wondered at this point whether the nameserver thought it was not authoritative over example.com, and adjusted it to the following: example.com IN SOA ns3.example.com. hostmaster.mail.example.com. ( After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for whose SOA still says ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they stop responding as well and I have to change their SOA to ns3.example.com? Is this also only required because ns1 and ns2 are currently offline?
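
    A hedged reading of what happened (the commands are standard BIND tooling; the names follow the fictitious ones above): the SOA MNAME field is not what makes ns3 authoritative -- that comes from the zone being configured and successfully loaded as a slave -- so a sudden SERVFAIL after days of correct answers often means the slave's copy of the zone hit its expire timer while the masters were down and was discarded. Comparing serials and forcing a transfer would confirm or rule that out:

        # Hedged sketch: is the slave copy of the zone still current?
        dig @1.2.3.4 example.com SOA +norecurse      # compare the serial with the master's
        grep -i example.com /var/log/daemon.log      # log path is a guess; look for "expired" / transfer errors
        rndc retransfer example.com                  # force a fresh AXFR once a master is reachable again
        rndc reload example.com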

    Read the article

  • Creating a fallback error page for nginx when root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server - to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain and then it just works.™ server { # Catch all domains starting with only "www." and boot them to non "www." domain. listen 80; server_name ~^www\.(.*)$; return 301 $scheme://$1$request_uri; } server { # Catch all domains that do not start with "www." listen 80; server_name ~^(?!www\.).+; client_max_body_size 20M; # Send all requests to the appropriate host root /usr/share/nginx/sites/$host; index index.html index.htm index.php; location / { try_files $uri $uri/ =404; } recursive_error_pages on; error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme; error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme; error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme; error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme; error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme; error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme; error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme; error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme; error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme; location ~ \.(php|html) { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_intercept_errors on; } } However there is one issue that I'd like to resolve, and that is when a domain that doesn't have a folder in the sites directory, nginx throws an internal 500 error page because it cannot redirect to /errorpages/error.php as it doesn't exist. How can I create a fallback error page that will catch these failed requests?
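
    One way to approach it (a sketch, not a tested drop-in for this exact config): keep the error pages in a shared directory that exists no matter which host was requested, and return 404 up front when the per-host docroot is missing, so the error_page targets can always be resolved. The shared path below is an assumption:

        # Hedged sketch -- inside the catch-all server {} block
        # serve error pages from a path shared by every host
        location ^~ /errorpages/ {
            alias /usr/share/nginx/errorpages/;
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /usr/share/nginx/errorpages/error.php;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

        # unknown host folder -> 404, which the error_page directives then render
        if (!-d /usr/share/nginx/sites/$host) {
            return 404;
        }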

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
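
    For questions (b) and (c), these are the knobs usually reached for; treat the numbers as starting points rather than recommendations, and note that degradation on long benchmark runs is often the load-generating client exhausting ephemeral ports (sockets stuck in TIME_WAIT) rather than nginx itself:

        # Hedged sketch: nginx.conf fragments commonly used for high request rates
        worker_processes     4;            # roughly one per core
        worker_rlimit_nofile 100000;       # raise the per-worker fd limit

        events {
            worker_connections 4096;
            multi_accept       on;
        }

        http {
            keepalive_requests 1000;       # let benchmark clients reuse connections
            keepalive_timeout  30;
        }

        # kernel side (sysctl), often paired with the above:
        #   net.core.somaxconn = 4096
        #   net.ipv4.ip_local_port_range = 1024 65535
        #   net.ipv4.tcp_tw_reuse = 1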

    Read the article

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but is not for myself. I was wondering if anyone has any advice on how to improve this configuration: ... ssl_certificate example.net.crt; ssl_certificate_key example.key; server { listen 80 default; listen 443 ssl; server_name www.example.net example.net; access_log /srv/www/example.net/logs/access.log; error_log /srv/www/example.net/logs/error.log; root /srv/www/example.net/public_html; index index.php index.html; location / { if ( $scheme = https ){ rewrite ^ http://example.net$request_uri? permanent; } try_files $uri $uri/ /index.php?$uri&$args; index index.php index.html; } location ^~ /admin.php { if ( $scheme = http ) { rewrite ^ https://example.net$request_uri? permanent; } try_files $uri /index.php; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS on; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } } ... It seems that the extra information in the location ^~ /admin.php block is unecessary, does anyone know of an easy way to avoid duplicate code? Without it it skips the php block and just returns the php files. Currently it applies https correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-https website in Firefox, it does not correctly return to http but stays as SSL. Like I said earlier, this only happens for me, the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance! EDIT: Clearing the cache / cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
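
    On the duplication question: the PHP handling can live in one snippet that both locations include, with the HTTPS flag coming from the built-in $https variable (empty for plain HTTP, "on" for SSL, available in reasonably recent nginx releases) instead of being hard-coded twice. A sketch under the assumption that nothing else differs between the two blocks:

        # Hedged sketch: /etc/nginx/php_common.conf (file name is arbitrary)
        try_files $uri /index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS $https;

        # then in the server block:
        #   location ^~ /admin.php { if ($scheme = http) { rewrite ^ https://example.net$request_uri? permanent; } include /etc/nginx/php_common.conf; }
        #   location ~ \.php$      { include /etc/nginx/php_common.conf; }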

    Read the article

  • Wifi network stopped being visible (and usable) (Linksys wag320n)

    - by s427
    Basically, my wifi network simply stopped working for no apparent reason. It doesn't appear in the list of the available networks anymore. I can see all my neighbors' networks, but not mine. It's as if it doesn't exist anymore. The internet connection (non-wifi), which goes through the same modem/router, is fine though. I already had a similar problem about one year ago (see here: Wifi network SSID not visible ), just after buying this very modem. I finally got it to work after performing two factory resets and getting rid of the Cisco "Magic" software; but this time it's not working. I use a linksys router-modem (WAG320N) which is directly connected (via network cable) to my desktop computer (Windows 7). I have (mainly) two devices that use the wifi network: my phone (Samsung Galaxy Nexus) and an Asus tablet (TF201, aka Transformer Prime). I also resurrected an old laptop computer (Dell, running Windows XP) to test that, and it doesn't see anything either (apart from the 20 other wifi networks, of course ^^). This wifi network was working just fine and has been for about a year. I haven't touched the modem settings so I have no idea what's causing the problem. I tried: making my phone "forget" about my network, hoping it would see it again after that: no luck. re-entering the network informations (SSID/password) manually on my phone: still no luck (says it's not in range) exporting the modem configuration, resetting the modem (factory reset, via modem admin), restarting it, importing the configuration: nope. factory reset, turning it off for 15 minutes, restarting, re-factory reset, and entering the configuration manually: still nothing. Has anybody experienced something similar before? Have you any suggestion to fix that? Thanks in advance. PS: to clear things up, here are the settings of my modem regarding wifi: Basic wireless settings: Configuration: manual Radio Band: 2.4GHz Wireless Network Mode: B/G/N-Mixed SSID: s427 Channel Bandwidth: Wide - 40 MHz Channel Wide Channel: 9 - 2.452GHz Standard Channel: 11 - 2.462GHz SSID Broadcast: Enable Advanced Wireless Settings AP Isolation: Disable Authentication Type: Auto Basic Rate: Default Transmission Rate: Auto N Transmission Rate: Auto CTS Protection Mode: Disable Beacon Interval: 100 DTIM Interval: 1 Fragmentation Threshold: 2346 RTS Threshold: 2346

    Read the article

  • Wrapping a point-to-point link

    - by user3712955
    I'm using a pair of IP radios (non-WiFi) to bridge my office engineering LAN (172.0.0.0/8) to a lab in another building. The radios work fine, but they expose a web management interface I'd like to hide, and they also generate traffic (ARP, STP, and more) that I need to keep off my (very, very clean) LAN segments. I have some ARM-Linux boards (similar to Beagle/Panda/RasPi) running Ubuntu, and I've put one at each end of the link, between the radio and the LAN. Each of the boards has 2 wired Ethernet interfaces, eth0 and eth1. The LAN segments are connected to eth0, and the radios are connected to eth1. I'd like to accomplish the following: Keep radio-originated traffic off my LAN segments! Hide all services provided by the radio (web, ssh, etc.) Transparently pass all traffic between the LAN segments (including things like ARP). The above also applies to the ARM-Linux boards: No stray traffic my LAN from them either! I'd like the system to look like a switch: LAN packets arriving at one eth0 appear at the other. And neither eth0 should have an IP address: The working system should behave like a CAT6 cable with some latency (instead of ARM boards and radios). Unfortunately, I'm confused about how to properly configure the ARM Ubuntu systems. What I'm guessing I need is a bridge on each board (br0?) and a VLAN (vlan0 or eth0.0?) to isolate the LAN traffic from everything else as it passes through the ARM boards and the radios. Then I need some kind of a firewall to block sending anything out eth0 that isn't from the other eth0 (via the VLAN). I've looked at the ip and ebtables commands (especially -t broute). While the concepts sorta-kinda make sense, I'm completely lost in the details. Edit: In the perverse case that a system on one of my LAN segments has the same IP address as one of the radios, or as eth1 on the ARM-Ubuntu boards, a VLAN won't work. Which I believe means I need to tunnel all traffic between the two eth0 interfaces to get that "like a wire" behavior. Help? Finally, I'd like to have a way to temporarily expose services on the ARM boards (ssh) and the radios (web) for maintenance purposes. Ideally, it would expose an IP address with ssh available on port 22. Once connected, I figure I'd start an X11 session and run a browser on the ARM board to access the radios. Or something. I could login via the console to enable/disable this, or perhaps could use a GPIO to trigger a script. I feel I've identified most of the pieces needed to make all this happen, but I have no idea how to combine them to make a working system. Thanks!
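
    A minimal starting point for the "behave like a cable" part (a sketch only -- it does not cover the duplicate-IP/tunnelling concern or the temporary maintenance access, and it assumes the stock bridge-utils and ebtables packages): bridge eth0 and eth1 with no IP address anywhere, then stop the board itself from ever originating frames onto the LAN side.

        # Hedged sketch: transparent, addressless bridge between LAN (eth0) and radio (eth1)
        sudo brctl addbr br0
        sudo brctl addif br0 eth0
        sudo brctl addif br0 eth1
        sudo brctl stp br0 off               # don't add our own STP chatter
        sudo ip addr flush dev eth0
        sudo ip addr flush dev eth1
        sudo ip link set eth0 up
        sudo ip link set eth1 up
        sudo ip link set br0 up              # br0 itself gets no IP address

        # keep anything this board generates (ARP, mDNS, ...) off the clean LAN segment:
        sudo ebtables -A OUTPUT -o eth0 -j DROP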

    Read the article

  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response from pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box. $ nslookup google.com Server: 10.254.2.5 Address: 10.254.2.5#53 Non-authoritative answer: Name: google.com Address: 74.125.47.147 Name: google.com Address: 74.125.47.99 <etc...> 10.254.2.5 is the Win2K3 server. $ ping google.com PING google.com (74.125.47.106) 56(84) bytes of data. It just hangs here indefinitely. $ cat /etc/resolv.conf ; generated by /sbin/dhclient-script search <...snip...>.local nameserver 10.254.2.5 nameserver 10.254.2.124 10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name. # ifconfig eth0 Link encap:Ethernet HWaddr <snip> inet addr:10.254.2.101 Bcast:10.254.2.255 Mask:255.255.255.0 inet6 addr: <snip>/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:80066 errors:0 dropped:0 overruns:0 frame:0 TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7810133 (7.4 MiB) TX bytes:590550 (576.7 KiB) Interrupt:225 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:32 errors:0 dropped:0 overruns:0 frame:0 TX packets:32 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8104 (7.9 KiB) TX bytes:8104 (7.9 KiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.254.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 10.254.2.5 0.0.0.0 UG 0 0 0 eth0 And, for good measure, a snapshot of the current ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link. Sorry! system-config-network snapshot I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have posted. Thanks!
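
    Since name resolution already works, the next step is usually to separate routing from ICMP filtering; these are stock tools on CentOS 4.x (the external address is just an example, and traceroute/tcpdump may need to be installed):

        # Hedged sketch: is it routing, or is ICMP being dropped somewhere upstream?
        ping -c 3 74.125.47.147        # external host by IP, bypassing DNS entirely
        ping -c 3 10.254.2.5           # can the default gateway itself still be reached?
        traceroute -n 74.125.47.147    # where along the path do the probes stop?
        tcpdump -n -i eth0 icmp        # do echo requests leave, and do replies ever return?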

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
    Hi I am graduating from my Computer Science degree in a few weeks from now!! I started to look for my first job. For the last couple years I gotten really into web programming(Asp.net). My first choice would be to get a junior asp.net MVC developer but I don't any companies in my area use MVC yet or if they do they are not hiring. So my second choice would be a junior asp.net Webforms developer. My other choices after that would be forms applications, mobile applications using .Net and C#. As you can see I am looking for something with .Net. I spent the last couple years doing .Net projects for school, on my free time and love the Language and it would pain me right now to switch to something like php. So now I found a posting in my area for an Entry Software Developer. I like the fact that they are using .net and that it is entry job(I never worked in this industry and never had more then like a tutoring job so I want to for like intermediate jobs). Posting Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers. POSITION SUMMARY As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C# and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability. This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position - which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by: • Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures. • Working with the engineers to design and build the stress and load testing framework which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab. • Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed. • Mentoring and leading the QA Team in programmatic test automation technologies and tools. MUST HAVE SKILLS / QUALIFICATIONS: • Diploma or higher Degree in Computer Science, or equivalent formal training. • Fundamental C# programming skills. • Knowledge of Internet technologies and Microsoft Windows platforms. • Knowledge of PC hardware. • Excellent communication skills (both oral and written). 
• Self-starter who takes initiative, requires minimal supervision, can handle multiple simultaneous tasks. • Detail-oriented, able to concentrate, and work quickly. • Proven diagnostic, analytical, and problem solving skills. NICE TO HAVE SKILLS: • Exposure to Visual Studio Team System or Visual Studio Test Edition. • Exposure in C# using NUnit. • Exposure to NUnit, HTTPUnit, and other automation tool suites. • Exposure to Performance/Stress/Load Testing. • Good understanding of relational databases (MS SQL Server). • Familiar with video and online multi-player games. As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with “Entry Level Software Developer” in the subject line. So that is the posting. To me it sounds like it is QA job. I don't have anything against QA jobs but alot of them seems to be your just clicking buttons and running scripts. Is this what a typical software developer does? Like I am so on the fence to apply for this job. On one side I am not sure how much programming I would be doing. Like I want to be at least half the time programming otherwise my skills will never improve since I will never be programming in teams and stuff. At the same time I have no experience in the industry so on the other side I am thinking just go for it and then maybe a year later try to get a full programming job(provided that I got the job). Yet if I am not programming in that job then that experience will not help me for the next job I find as I will be back a square one.

    Read the article

  • PHP - XML Feed get print values

    - by danit
    Here is my feed: <entry> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')</id> <title type="text">Building Web Applications with Microsoft SQL Azure</title> <summary type="text">SQL Azure provides a highly available and scalable relational database engine in the cloud. In this demo-intensive and interactive session, learn how to quickly build web applications with SQL Azure Databases and familiar web technologies. We demonstrate how you can quickly provision, build and populate a new SQL Azure database directly from your web browser. Also, see firsthand several new enhancements we are adding to SQL Azure based on the feedback we&#x2019;ve received from the community since launching the service earlier this year.</summary> <published>2010-01-25T00:00:00-05:00</published> <updated>2010-03-05T01:07:05-05:00</updated> <author> <name /> </author> <link rel="edit" title="Session" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Speakers" type="application/atom+xml;type=feed" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers"> <m:inline> <feed> <title type="text">Speakers</title> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers</id> <updated>2010-03-25T11:56:06Z</updated> <link rel="self" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers" /> <entry> <id>http://api.visitmix.com/OData.svc/Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')</id> <title type="text">David-Robinson</title> <summary type="text"></summary> <updated>2010-03-25T11:56:06Z</updated> <author> <name>David Robinson</name> </author> <link rel="edit-media" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/$value" /> <link rel="edit" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Sessions" type="application/atom+xml;type=feed" title="Sessions" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/Sessions" /> <category term="EventModel.Speaker" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="image/jpeg" src="http://live.visitmix.com/Content/images/speakers/lrg/default.jpg" /> <m:properties xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"> <d:SpeakerID m:type="Edm.Guid">3395ee85-d994-423c-a726-76b60a896d2a</d:SpeakerID> <d:SpeakerFirstName>David</d:SpeakerFirstName> <d:SpeakerLastName>Robinson</d:SpeakerLastName> <d:LargeImage m:null="true"></d:LargeImage> <d:SmallImage m:null="true"></d:SmallImage> <d:Twitter m:null="true"></d:Twitter> </m:properties> </entry> </feed> </m:inline> </link> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Tags" type="application/atom+xml;type=feed" title="Tags" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Tags" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Files" type="application/atom+xml;type=feed" title="Files" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Files" /> <category term="EventModel.Session" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:SessionID m:type="Edm.Guid">816995df-b09a-447a-9391-019512f643a0</d:SessionID> 
<d:Location>Breakers L</d:Location> <d:Type>Seminar</d:Type> <d:Code>SVC07</d:Code> <d:StartTime m:type="Edm.DateTime">2010-03-17T12:00:00</d:StartTime> <d:EndTime m:type="Edm.DateTime">2010-03-17T13:00:00</d:EndTime> <d:Slug>SVC07</d:Slug> <d:CreatedDate m:type="Edm.DateTime">2010-01-26T18:14:24.687</d:CreatedDate> <d:SourceID m:type="Edm.Guid">cddca9b7-6830-4d06-af93-5fd87afb67b0</d:SourceID> </m:properties> </content> </entry> I want to print the: Session Title (Building Web Applications with Microsoft SQL Azure) The Author (David Robinson) The Location (Breakers L) And display the speakers image (http://live.visitmix.com/Content/images/speakers/lrg/default.jpg) I presume I can use filegetcontents and then transform to simplexmlstring, but I dont know how to get the deeper items in I want, like Author, and image. Any chance of a bit of coding genius here?
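
    A hedged sketch of the parsing side, in PHP since that is what the question asks for. It assumes the feed has been saved locally as feed.xml, that the root element declares the usual Atom default namespace (http://www.w3.org/2005/Atom), and that every session entry is shaped like the sample above; the XPath expressions are assumptions to adjust if the live feed differs:

        <?php
        // Hedged sketch: pull title, speaker, location and image out of the OData Atom feed
        $sxe = simplexml_load_file('feed.xml');

        $ns = array(
            'a' => 'http://www.w3.org/2005/Atom',
            'm' => 'http://schemas.microsoft.com/ado/2007/08/dataservices/metadata',
            'd' => 'http://schemas.microsoft.com/ado/2007/08/dataservices',
        );
        foreach ($ns as $p => $uri) { $sxe->registerXPathNamespace($p, $uri); }

        // one session per <entry> that carries OData properties
        foreach ($sxe->xpath('//a:entry[a:content/m:properties/d:SessionID]') as $entry) {
            foreach ($ns as $p => $uri) { $entry->registerXPathNamespace($p, $uri); } // xpath results need their own registrations

            $title    = (string) current($entry->xpath('a:title'));
            $location = (string) current($entry->xpath('a:content/m:properties/d:Location'));
            // speaker name and photo sit inside the inlined Speakers feed
            $speaker  = (string) current($entry->xpath('.//m:inline//a:author/a:name'));
            $src      = current($entry->xpath('.//m:inline//a:entry/a:content/@src'));
            $image    = $src ? (string) $src : '';

            echo "$title | $speaker | $location | $image\n";
        }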

    Read the article

  • Maven struts2 modular archetype failing to generate!

    - by Xinus
    I am trying to generate struts 2 modular archetype using maven but always getting error as archetype not present here is a full output : C:\Users\Administrator>mvn archetype:generate [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus .velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Interactive mode [INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven. archetypes:maven-archetype-quickstart:1.0) Choose archetype: 1: internal -> appfuse-basic-jsf (AppFuse archetype for creating a web applicati on with Hibernate, Spring and JSF) 2: internal -> appfuse-basic-spring (AppFuse archetype for creating a web applic ation with Hibernate, Spring and Spring MVC) 3: internal -> appfuse-basic-struts (AppFuse archetype for creating a web applic ation with Hibernate, Spring and Struts 2) 4: internal -> appfuse-basic-tapestry (AppFuse archetype for creating a web appl ication with Hibernate, Spring and Tapestry 4) 5: internal -> appfuse-core (AppFuse archetype for creating a jar application wi th Hibernate and Spring and XFire) 6: internal -> appfuse-modular-jsf (AppFuse archetype for creating a modular app lication with Hibernate, Spring and JSF) 7: internal -> appfuse-modular-spring (AppFuse archetype for creating a modular application with Hibernate, Spring and Spring MVC) 8: internal -> appfuse-modular-struts (AppFuse archetype for creating a modular application with Hibernate, Spring and Struts 2) 9: internal -> appfuse-modular-tapestry (AppFuse archetype for creating a modula r application with Hibernate, Spring and Tapestry 4) 10: internal -> maven-archetype-j2ee-simple (A simple J2EE Java application) 11: internal -> maven-archetype-marmalade-mojo (A Maven plugin development proje ct using marmalade) 12: internal -> maven-archetype-mojo (A Maven Java plugin development project) 13: internal -> maven-archetype-portlet (A simple portlet application) 14: internal -> maven-archetype-profiles () 15: internal -> maven-archetype-quickstart () 16: internal -> maven-archetype-site-simple (A simple site generation project) 17: internal -> maven-archetype-site (A more complex site project) 18: internal -> maven-archetype-webapp (A simple Java web application) 19: internal -> jini-service-archetype (Archetype for Jini service project creat ion) 20: internal -> softeu-archetype-seam (JSF+Facelets+Seam Archetype) 21: internal -> softeu-archetype-seam-simple (JSF+Facelets+Seam (no persistence) Archetype) 22: internal -> softeu-archetype-jsf (JSF+Facelets Archetype) 23: internal -> jpa-maven-archetype (JPA application) 24: internal -> spring-osgi-bundle-archetype (Spring-OSGi archetype) 25: internal -> confluence-plugin-archetype (Atlassian Confluence plugin archety pe) 26: internal -> jira-plugin-archetype (Atlassian JIRA plugin archetype) 27: internal -> 
maven-archetype-har (Hibernate Archive) 28: internal -> maven-archetype-sar (JBoss Service Archive) 29: internal -> wicket-archetype-quickstart (A simple Apache Wicket project) 30: internal -> scala-archetype-simple (A simple scala project) 31: internal -> lift-archetype-blank (A blank/empty liftweb project) 32: internal -> lift-archetype-basic (The basic (liftweb) project) 33: internal -> cocoon-22-archetype-block-plain ([http://cocoon.apache.org/2.2/m aven-plugins/]) 34: internal -> cocoon-22-archetype-block ([http://cocoon.apache.org/2.2/maven-p lugins/]) 35: internal -> cocoon-22-archetype-webapp ([http://cocoon.apache.org/2.2/maven- plugins/]) 36: internal -> myfaces-archetype-helloworld (A simple archetype using MyFaces) 37: internal -> myfaces-archetype-helloworld-facelets (A simple archetype using MyFaces and facelets) 38: internal -> myfaces-archetype-trinidad (A simple archetype using Myfaces and Trinidad) 39: internal -> myfaces-archetype-jsfcomponents (A simple archetype for create c ustom JSF components using MyFaces) 40: internal -> gmaven-archetype-basic (Groovy basic archetype) 41: internal -> gmaven-archetype-mojo (Groovy mojo archetype) Choose a number: (1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/2 4/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41) 15: : 8 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] The defined artifact is not an archetype [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Sat Mar 27 08:22:38 IST 2010 [INFO] Final Memory: 8M/21M [INFO] ------------------------------------------------------------------------ C:\Users\Administrator> What can be the problem ?
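
    The list shown comes from the plugin's bundled (and by now stale) internal catalogue, and choosing entry 8 apparently resolves to an artifact that is not packaged as an archetype, hence "The defined artifact is not an archetype". A common workaround is to skip the menu and pass the archetype coordinates explicitly; the groupId/artifactId/version below are placeholders to be replaced with the values from the current AppFuse documentation, not verified coordinates:

        # Hedged sketch: non-interactive archetype:generate with explicit coordinates
        mvn archetype:generate \
          -DarchetypeGroupId=org.appfuse.archetypes \
          -DarchetypeArtifactId=appfuse-modular-struts-archetype \
          -DarchetypeVersion=<version-from-appfuse-docs> \
          -DgroupId=com.mycompany -DartifactId=myapp \
          -DinteractiveMode=false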

    Read the article

  • Windows Impersonation failed

    - by skprocks
I am using the following code to implement impersonation for a particular Windows account, and it is failing. Please help.

using System;
using System.Configuration;
using System.IO;
using System.Text;
using System.Web;
using System.Web.UI;
using System.Security.Principal;
using System.Runtime.InteropServices;

public partial class Source_AddNewProduct : System.Web.UI.Page
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(
        string principal,
        string authority,
        string password,
        LogonSessionType logonType,
        LogonProvider logonProvider,
        out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    enum LogonSessionType : uint
    {
        Interactive = 2,
        Network,
        Batch,
        Service,
        NetworkCleartext = 8,
        NewCredentials
    }

    enum LogonProvider : uint
    {
        Default = 0,   // default for platform (use this!)
        WinNT35,       // sends smoke signals to authority
        WinNT40,       // uses NTLM
        WinNT50        // negotiates Kerb or NTLM
    }

    // Impersonation is used when the user tries to upload an image to a network drive
    protected void btnPrimaryPicUpload_Click1(object sender, EventArgs e)
    {
        try
        {
            string mDocumentExt = string.Empty;
            string mDocumentName = string.Empty;
            HttpPostedFile mUserPostedFile = null;
            HttpFileCollection mUploadedFiles = null;
            string xmlPath = string.Empty;
            FileStream fs = null;
            StreamReader file;
            string modify;

            mUploadedFiles = HttpContext.Current.Request.Files;
            mUserPostedFile = mUploadedFiles[0];

            if (mUserPostedFile.ContentLength >= 0 && Path.GetFileName(mUserPostedFile.FileName) != "")
            {
                mDocumentName = Path.GetFileName(mUserPostedFile.FileName);
                mDocumentExt = Path.GetExtension(mDocumentName);
                mDocumentExt = mDocumentExt.ToLower();

                if (mDocumentExt != ".jpg" && mDocumentExt != ".JPG" &&
                    mDocumentExt != ".gif" && mDocumentExt != ".GIF" &&
                    mDocumentExt != ".jpeg" && mDocumentExt != ".JPEG" &&
                    mDocumentExt != ".tiff" && mDocumentExt != ".TIFF" &&
                    mDocumentExt != ".png" && mDocumentExt != ".PNG" &&
                    mDocumentExt != ".raw" && mDocumentExt != ".RAW" &&
                    mDocumentExt != ".bmp" && mDocumentExt != ".BMP" &&
                    mDocumentExt != ".TIF" && mDocumentExt != ".tif")
                {
                    Page.RegisterStartupScript("select", "<script language=" + Convert.ToChar(34) + "VBScript" + Convert.ToChar(34) +
                        "> MsgBox " + Convert.ToChar(34) + "Please upload valid picture file format" + Convert.ToChar(34) + " , " +
                        Convert.ToChar(34) + "64" + Convert.ToChar(34) + " , " + Convert.ToChar(34) + "WFISware" + Convert.ToChar(34) + "</script>");
                }
                else
                {
                    int intDocLen = mUserPostedFile.ContentLength;
                    byte[] imageBytes = new byte[intDocLen];
                    mUserPostedFile.InputStream.Read(imageBytes, 0, mUserPostedFile.ContentLength);

                    //xmlPath = @ConfigurationManager.AppSettings["ImagePath"].ToString();
                    xmlPath = Server.MapPath("./../ProductImages/");
                    mDocumentName = Guid.NewGuid().ToString().Replace("-", "") + System.IO.Path.GetExtension(mUserPostedFile.FileName);

                    //if (System.IO.Path.GetExtension(mUserPostedFile.FileName) == ".jpg")
                    //{
                    //}
                    //if (System.IO.Path.GetExtension(mUserPostedFile.FileName) == ".gif")
                    //{
                    //}

                    mUserPostedFile.SaveAs(xmlPath + mDocumentName);

                    // Remove the comments up to the stmt xmlPath = "./../ProductImages/"; to implement impersonation
                    byte[] bytContent;
                    IntPtr token = IntPtr.Zero;
                    WindowsImpersonationContext impersonatedUser = null;
                    try
                    {
                        // Note: Credentials should be encrypted in configuration file
                        bool result = LogonUser(ConfigurationManager.AppSettings["ServiceAccount"].ToString(),
                                                "ad-ent",
                                                ConfigurationManager.AppSettings["ServiceAccountPassword"].ToString(),
                                                LogonSessionType.Network,
                                                LogonProvider.Default,
                                                out token);
                        if (result)
                        {
                            WindowsIdentity id = new WindowsIdentity(token);

                            // Begin impersonation
                            impersonatedUser = id.Impersonate();
                            mUserPostedFile.SaveAs(xmlPath + mDocumentName);
                        }
                        else
                        {
                            throw new Exception("Identity impersonation has failed.");
                        }
                    }
                    catch
                    {
                        throw;
                    }
                    finally
                    {
                        // Stop impersonation and revert to the process identity
                        if (impersonatedUser != null)
                            impersonatedUser.Undo();

                        // Free the token
                        if (token != IntPtr.Zero)
                            CloseHandle(token);
                    }

                    xmlPath = "./../ProductImages/";
                    xmlPath = xmlPath + mDocumentName;
                    string o_image = xmlPath;
                    // For impersonation uncomment this line and comment the next line
                    //string o_image = "../ProductImages/" + mDocumentName;
                    ViewState["masterImage"] = o_image;

                    //fs = new FileStream(xmlPath, FileMode.Open, FileAccess.Read);
                    //file = new StreamReader(fs, Encoding.UTF8);
                    //modify = file.ReadToEnd();
                    //file.Close();
                    //commented by saurabh kumar 28may'09

                    imgImage.Visible = true;
                    imgImage.ImageUrl = ViewState["masterImage"].ToString();
                    img_Label1.Visible = false;
                }
                //e.Values["TemplateContent"] = modify;
                //e.Values["TemplateName"] = mDocumentName.Replace(".xml", "");
            }
        }
        catch (Exception ex)
        {
            ExceptionUtil.UI(ex);
            Response.Redirect("errorpage.aspx");
        }
    }
}

On execution the code throws a System.InvalidOperationException. I have granted the Windows service account that I am impersonating full control on the destination folder.
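
A note on the failure mode, offered only as a hedged guess: a LogonSessionType.Network logon produces a token that generally cannot be used to reach a remote share on behalf of the user, so uploads to a network drive under impersonation are usually done with an Interactive or NewCredentials logon instead. The fragment below reuses the P/Invoke declarations and variables from the code above and only illustrates that change; the UNC path is a placeholder and this is not a confirmed fix for the InvalidOperationException.

IntPtr token = IntPtr.Zero;
WindowsImpersonationContext ctx = null;
try
{
    // NewCredentials (LOGON32_LOGON_NEW_CREDENTIALS) applies the supplied credentials to
    // outbound network access only and requires the WinNT50 provider.
    bool ok = LogonUser(ConfigurationManager.AppSettings["ServiceAccount"],
                        "ad-ent",
                        ConfigurationManager.AppSettings["ServiceAccountPassword"],
                        LogonSessionType.NewCredentials,
                        LogonProvider.WinNT50,
                        out token);
    if (!ok)
        throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

    WindowsIdentity id = new WindowsIdentity(token);
    ctx = id.Impersonate();

    // Placeholder UNC path; while impersonating, the save runs with the service account's credentials.
    mUserPostedFile.SaveAs(@"\\fileserver\ProductImages\" + mDocumentName);
}
finally
{
    if (ctx != null) ctx.Undo();
    if (token != IntPtr.Zero) CloseHandle(token);
}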

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
A remote machine running Fedora uses OpenVPN, and multiple developers have successfully connected to it with their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to /etc/openvpn on my local machine. My client.conf file looks like this:

##############################################
# Sample client-side OpenVPN 2.0 config file #
# for connecting to multi-client server.     #
#                                            #
# This configuration can be used by multiple #
# clients, however each client should have   #
# its own cert and key files.                #
#                                            #
# On Windows, you might want to rename this  #
# file so it has a .ovpn extension           #
##############################################

# Specify that we are a client and that we
# will be pulling certain config file directives
# from the server.
client

# Use the same setting as you are using on
# the server.
# On most systems, the VPN will not function
# unless you partially or fully disable
# the firewall for the TUN/TAP interface.
;dev tap
dev tun

# Windows needs the TAP-Win32 adapter name
# from the Network Connections panel
# if you have more than one. On XP SP2,
# you may need to disable the firewall
# for the TAP adapter.
;dev-node MyTap

# Are we connecting to a TCP or
# UDP server? Use the same setting as
# on the server.
;proto tcp
proto udp

# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote xx.xxx.xx.130 1194
;remote my-server-2 1194

# Choose a random host from the remote
# list for load-balancing. Otherwise
# try hosts in the order specified.
;remote-random

# Keep trying indefinitely to resolve the
# host name of the OpenVPN server. Very useful
# on machines which are not permanently connected
# to the internet such as laptops.
resolv-retry infinite

# Most clients don't need to bind to
# a specific local port number.
nobind

# Downgrade privileges after initialization (non-Windows only)
;user nobody
;group nogroup

# Try to preserve some state across restarts.
persist-key
persist-tun

# If you are connecting through an
# HTTP proxy to reach the actual OpenVPN
# server, put the proxy server/IP and
# port number here. See the man page
# if your proxy server requires
# authentication.
;http-proxy-retry # retry on connection failures
;http-proxy [proxy server] [proxy port #]

# Wireless networks often produce a lot
# of duplicate packets. Set this flag
# to silence duplicate packet warnings.
;mute-replay-warnings

# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
ca ca.crt
cert home.crt
key home.key

# Verify server certificate by checking
# that the certificate has the nsCertType
# field set to "server". This is an
# important precaution to protect against
# a potential attack discussed here:
# http://openvpn.net/howto.html#mitm
#
# To use this feature, you will need to generate
# your server certificates with the nsCertType
# field set to "server". The build-key-server
# script in the easy-rsa folder will do this.
ns-cert-type server

# If a tls-auth key is used on the server
# then every client must also have the key.
;tls-auth ta.key 1

# Select a cryptographic cipher.
# If the cipher option is used on the server
# then you must also specify it here.
;cipher x

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.
comp-lzo

# Set log file verbosity.
verb 3

# Silence repeating messages
;mute 20

But when I start the connection and look in /var/log/syslog, I notice the following errors:

May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

And I am unable to connect to the server over the VPN:

$ ssh [email protected]
ssh: connect to host xxx.xx.xx.130 port 22: No route to host

What may I be doing wrong?
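
One low-risk way to narrow this down is to reproduce the failing route command outside of OpenVPN and compare it against the existing routing table; when run by hand, the underlying error text is usually more descriptive than exit status 4 (for example an overlapping route or an unreachable gateway). This is only a diagnostic sketch, with the addresses copied from the log above:

$ ip route show                      # current routing table before/after connecting
$ sudo /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
$ ip addr show tun0                  # confirm the tun interface came up and got an address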

    Read the article

  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch by Secret Labs + House of Horology Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. Any discrepancies between this post and any documentation on agentwatches.com, kickstarter.com, etc.., those official sites take precedence. Overview The next generation smartwatch with brand-new technology. World-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter Page, Comments Funding period : May 21, 2013 - Jun 20, 2013 MSRP : $249 Other Urls http://www.agentwatches.com/ https://www.facebook.com/agentwatches http://twitter.com/agentwatches http://pinterest.com/agentwatches/ http://paper.li/robchartier/1371234640 Developer Story The first official launch of the preview SDK and emulator will happen on 20-Jun-2013.  All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3.  The SDK will ship with the first round of the expected API for developers along with an emulator. With that said, there is no need to wait for the SDK.  You can download the tooling now and get started with Apps and Faces immediately.  The only thing that you will not be able to work with is the API; but for example, watch faces, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.   Does it look good? Before we dig into any more of the gory details, here are a few photos of the current available prototype models.   The watch on the tiny QI Charter   If you wander too far away from your phone, your watch will let you know with a vibration and a message, all but one button will dismiss the message.   An app showing the premium weather data!   Nice stitching on the straps, leather and silicon will be available, along with a few lengths to choose from (short, regular, long lengths). On to those gory details…. Hardware Specs Processor 120MHz ARM Cortex-M4 processor (ATSAM4SD32) with secondary AVR co-processor Flash & RAM 2MB of onboard flash and 160KB of RAM 1/4 of the onboard flash will be used by the OS The flash is permanent (non-volatile) storage. Bluetooth Bluetooth 4.0 BD/EDR + LE Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine. Sensors 3D Accelerometer (Motion) ST LSM303DLHC Ambient Light Sensor Hardware power metering Vibration Motor (You can pulse it to create vibration patterns, not sure about the vibration strength - driven with PWM) No piezo/speaker or microphone. Other QI Wireless Charging, no NFC, no wall adapter included Custom LED Backlight No GPS in the watch. It uses the GPS in your phone. AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free Battery Expected to use a CR2430-sized rechargeable battery – replaceable (Mouser, Amazon) Estimated charging time from empty is 2 hours with provided charger 7 Days typical with Bluetooth on, 30 days with Bluetooth off (watch-face only mode) The battery should last at least 2 years, with 100s of charge cycles. Physical dimensions Roughly 38mm top-to-bottom on the front face 35mm left-to-right on the front face and around 12mm in depth 22mm strap Two ~1/16" hex screws to attach the watch pin The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). 
The glass lens is mineral glass, Anti-glare glass lens Strap options Leather and silicon straps will be available Expected to have three sizes Display 1.28" Sharp Memory Display The display stays on 100% of the time. Dimensions: 128x128 pixels Buttons Custom "Pusher" buttons, they will not make noise like a mouse click, and are very durable. The top-left button activates the backlight; bottom-left changes apps; three buttons on the right are up/select/down and can be used for custom purposes by apps. Backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds Device Support Android 2.3 or newer iPhone 4S or newer Windows Phone 8 or newer Heart Rate monitors - Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the heart monitor to support. Almost limitless Bluetooth device support! Internationalization & Localization Full UTF8 Support from the ground up. AGENT's user interface is in English. Your content (caller ID, music tracks, notifications) will be in your native language. We have a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available Feature overview Phone lost alert Caller ID Music Control (possible volume control) Wireless Charging Timer Stopwatch Vibrating Alarm (possibly custom vibrations for caller id) A few default watch faces Airplane mode (by demand or low power) Can be turned off completely Customizable 3rd party watch faces, applications which can be loaded over bluetooth. Sample apps that maybe installed Weather Sample Apps not installed Exercise App Other Possible Skype integration over Bluetooth. They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone). You'll be able to use it to load apps onto the watch.. You will be able to cancel phone calls. With compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.
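
For the watch-face experimentation mentioned in the developer story, the AGENT-specific API had not shipped yet, so anything beyond the generic .NET Micro Framework drawing types is guesswork. Here is a rough sketch of rendering a time-only face into a 128x128 off-screen bitmap; the display size comes from the specs above, while font loading and pushing the frame to the watch display are assumptions:

using System;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation.Media;

public static class FaceSketch
{
    // Draws the current time into an off-screen bitmap sized for the 128x128 Sharp Memory Display.
    public static void Render(Font font, DateTime now)
    {
        Bitmap face = new Bitmap(128, 128);
        face.Clear();
        face.DrawText(now.ToString("HH:mm"), font, Color.White, 30, 55);
        face.Flush();   // on real hardware this would push the frame to the panel
    }
}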

    Read the article

  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
Recently I have struggled a lot to get the Novell iPrint client working on my laptop. I need it to use the Follow Me printers at our university (you can collect your print job from any printer). Using this tutorial from Novell, I converted the rpm package and installed it on Ubuntu 13.04 and 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray and the old tricks for whitelisting the Novell client didn't work). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories). I try installing printers from the provided IPP URL (which lists the available printers on the server) with no success. After clicking the printer name I see this:

I have various errors. Formerly Firefox used to ask for my network username/password when installing the SSL printer, but now it returns this:

iPrint Printer - The printer is currently not available.

However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address I see on working machines it still prints nothing. I have tried the Novell command line tool, iprntcmd, to print. It is installed at /opt/novell/iprint/bin/:

msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
iprntcmd v05.04.00
Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

It adds the printer with an empty location and again nothing prints. What I found interesting is the log file at ~/.iprint/errors.txt with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that HP is my local printer and has nothing to do with iPrint):

Thu Oct 31 11:02:03 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
Thu Oct 31 11:02:03 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
Thu Oct 31 11:02:05 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
Thu Oct 31 11:02:05 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
Thu Oct 31 11:02:06 2013 Trace Info: mydoreq.c, line 676 Group Info: CLIB Error Code: 0 (0x0) User ID: 1000 Error Msg: Success Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)
Thu Oct 31 11:02:06 2013 Trace Info: mydoreq.c, line 1293 Group Info: CUPS-IPP Error Code: 1282 (0x502) User ID: 1000 Error Msg: iPrint Printer - The printer is currently not available. Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE
Thu Oct 31 11:02:06 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
Thu Oct 31 11:02:06 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
Thu Oct 31 11:02:08 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
Thu Oct 31 11:02:08 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

I should say that my friend can print using the same instructions on CrunchBang easily, and another guy got it working on 12.04 LTS with a bit more struggling. It also worked for me on Linux Mint Maya with my old laptop. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support.

PS: I had the same problem with 13.04. It makes no difference whether I am inside the network or connected through VPN; I have the same issues.
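
Given that the log keeps complaining about a device URI of file:///dev/null, it may be worth checking how the queue was actually registered in CUPS and, if necessary, repointing it at the IPP URL by hand. This is only a hedged suggestion; the queue name below is hypothetical, so use whatever lpstat reports:

$ lpstat -v                         # list queues and their device URIs
$ sudo lpadmin -p Follow-me-IPP -E -v "ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP"
$ lp -d Follow-me-IPP /etc/hosts    # send a small test job through the queue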

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slow. The Developer Dashboard is turned off by default and I'll go over 3 different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:   You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long it took to complete (obviously the kind of processing depends on what the web part does). On the right side you would see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries are actually a hyperlink and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query. Enabling the Developer Dashboard Option 1: Managed Code   The Developer Dashboard is a farm-wide setting and the code above won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development since a Site Collection Admin can turn it on or off and for a particular site only. The first cool thing about this is that the Site Collection Admin that turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set the SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have the permission to see the output. Option 2: Using stsadm Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM –o setproperty –pn developer-dashboard –pv OnDemand To successfully execute this command, be sure you that are running as a Farm Admin. Option 3: Using PowerShell For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model: You can of course parameterized the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter. 
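
The scripts themselves are not reproduced in this excerpt, so here is a minimal reconstruction of what the PowerShell option looks like, assuming the standard SharePoint 2010 object model described above:

$svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$settings = $svc.DeveloperDashboardSettings
$settings.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]::OnDemand
$settings.Update()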
Events and the Developer Dashboard  Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events are monitored by default (for a web part it will events in the base web part class). Let's say you have a click event that could take some time, for example a web service call. And you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope". And each scope has a set of "Monitors" that measures and counts calls and timings which appears in the Developer Dashboard. Below is an example on how to get your custom code to get included in the Developer Dashboard by wrapping it inside a new monitored scope: The code above would include your new scope "My long web service call" into the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in a SPMonitoredScope is a SharePoint development best practice since it provides you visibility and a better understanding on the performance of your components.
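
The wrapped-code example referenced above is likewise not included in this excerpt; the idea looks roughly like this (the method being timed is a placeholder), using SPMonitoredScope from Microsoft.SharePoint.Utilities:

using (new SPMonitoredScope("My long web service call"))
{
    // Everything executed inside this block is measured and shows up as its own
    // node in the Developer Dashboard, including any nested monitored scopes.
    CallSlowWebService();   // placeholder for the actual web service call
}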

    Read the article

  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items. The build server is often heavily occupied as it is, so you don't want it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build there was no easy way to do this, but in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template we have defined several build process parameters that control the release build. These are placed in their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

I won't go into details on these parameters; the issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queuing a new release build, I got this error:

TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

<Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings.

The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml, ...). In TFS 2008, many of these settings were moved into the database; still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary that is the underlying object where these settings are stored.
Here is an example:

<Dictionary x:TypeArguments="x:String, x:Object"
            xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib"
            xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
    <mtbwa:BuildSettings.PlatformConfigurations>
      <mtbwa:PlatformConfigurationList Capacity="4">
        <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
      </mtbwa:PlatformConfigurationList>
    </mtbwa:BuildSettings.PlatformConfigurations>
  </mtbwa:BuildSettings>
  <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
  <x:Boolean x:Key="DisableTests">True</x:Boolean>
  <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
  <x:Int32 x:Key="Major">2</x:Int32>
  <x:Int32 x:Key="Minor">3</x:Int32>
</Dictionary>

Here we can see that only the non-default values are persisted into the database. So the problem in my case was that I had removed one of the parameters from the build process template, but the parameter and its value still existed in the build definition database.

The solution is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them. After refreshing the build definition and saving it, the build was running successfully again.
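
Refreshing the build definition in the UI is the straightforward fix. If you ever need to strip a stale parameter in code instead, something along these lines should work with the TFS 2010 client API; the collection URL, team project, definition and parameter names are placeholders:

using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow;
using Microsoft.TeamFoundation.Client;

class CleanProcessParameters
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var buildServer = collection.GetService<IBuildServer>();
        var definition = buildServer.GetBuildDefinition("TeamProject", "Release Build");

        // ProcessParameters holds the serialized dictionary shown above.
        var parameters = WorkflowHelpers.DeserializeProcessParameters(definition.ProcessParameters);
        if (parameters.Remove("ObsoleteParameter"))
        {
            definition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(parameters);
            definition.Save();
        }
    }
}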

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files. Controlling the Indexer By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on. Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window. Searching for Files You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows. Using Advanced Search Operators On Windows 7, you’ll notice that you can add “search filters” form the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. 
If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface. Creating Saved Searches Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files. One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.     
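
As a quick recap of the query syntax described above, here are a few more example searches; the file names and authors are made up, the operators are standard Advanced Query Syntax:

report ext:.docx datemodified:this week     (Word documents changed in the last seven days)
invoice kind:document size:>1MB             (documents larger than 1 MB mentioning "invoice")
vacation kind:picture datetaken:2013        (photos taken during 2013)
budget author:"Chris"                       (files whose author metadata matches Chris)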

    Read the article

< Previous Page | 471 472 473 474 475 476 477 478 479 480 481 482  | Next Page >