Search Results


  • Is there a way to change the default sound volume on startup in Windows?

    - by Logan Dam
    I've got a Creative X-Fi Titanium running on Windows 8, which works great, but the drivers have a weird quirk: they set my headphone volume to 30% every time I boot if fast boot is enabled. If I disable fast boot, my previous volume is remembered, but I don't want to disable fast boot any more (I have an SSD, I want to use it :P). I've asked a similar question here before, but as you can see the only "solution" was to disable fast boot, which I no longer want to do. Is there a command-line tool that will let me set my volume, or something similar that I can put in a batch file and run on startup?
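
    One possible approach, sketched here as an assumption rather than a confirmed fix: NirCmd is a small freeware command-line utility whose setsysvolume command sets the default device's volume on a 0-65535 scale, so a one-line batch file in the Startup folder could restore the level after every boot. The path and target level below are placeholders.

        rem restore-volume.bat -- assumes nircmd.exe is on the PATH
        rem 65535 = 100%, so 49151 is roughly 75%
        nircmd.exe setsysvolume 49151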

    Read the article

  • Configure IIS 7 Reverse Proxy to connect to TeamCity Tomcat

    - by Cynicszm
    We have an IIS 7 web server configured and would like to create a reverse proxy for a TeamCity installation using Tomcat on the same machine. The IIS site is https://somesite and I would like TeamCity to appear as https://somesite/teamcity, redirecting to http://localhost:portnumber. I have installed the IIS URL Rewrite extension from http://www.iis.net/download/URLRewrite and Application Request Routing from http://www.iis.net/download/ApplicationRequestRouting to try to set up a reverse proxy, but can't get it working. The closest answer I found is an old StackOverflow question, http://stackoverflow.com/questions/331755/how-do-i-setup-teamcity-for-public-access-over-https, which unfortunately doesn't have a working example. I've searched quite a bit but can't seem to find a relevant example. Any help appreciated (apologies for the bold, but the spam prevention won't let me post more than 1 hyperlink).
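
    A hedged sketch of the kind of rule this usually takes (not from the original question): with ARR's "Enable proxy" option ticked under Server Proxy Settings, a URL Rewrite rule in the site's web.config can forward the /teamcity path to the Tomcat listener. Port 8111 is TeamCity's usual default and is an assumption here.

        <rewrite>
          <rules>
            <!-- forward /teamcity/... to the local Tomcat listener -->
            <rule name="TeamCityProxy" stopProcessing="true">
              <match url="^teamcity/(.*)" />
              <action type="Rewrite" url="http://localhost:8111/{R:1}" />
            </rule>
          </rules>
        </rewrite>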

    Read the article

  • Experience vs. versatility

    - by Florin Bombeanu
    Let's say a .NET programmer works at a company which provides software on demand, not as a product. The programmer works in WPF for a period of time and invests lots of time in it. He/she gets very good at WPF, Windows Forms, and desktop development in general. But the company has to provide a web application now, so the developer has to learn MVC or Web Forms. He/she is not experienced in web development, so he/she starts investing time in this new technology and in time gets good at it. But this time the company has to provide a SharePoint solution, and so on. What is more important: being very, very good at a certain technology, or being as versatile as possible, knowing less of each technology but covering a greater area of expertise? Should the programmer keep studying and working in WPF until he/she reaches guru level, or is it a good thing that they had to learn other technologies as well? I agree with those of you who will say that when learning different technologies you also learn things which are useful no matter what technology you're programming in. But eventually, when the programmer wants to change jobs, will it matter more that he/she knows some WPF, MVC, or SharePoint than the fact that he/she is insanely good at one of them? I would think the latter - being insanely good at one of them - matters more, since most companies are looking for a developer for a certain technology. I don't think there are many companies looking for technical know-it-all people. What do you think?

    Read the article

  • How to integrate Windows Server 2008 R2's NPS with Cisco switches?

    - by Massimo
    I need to evaluate in a lab environment the use of Windows Server 2008 R2's NPS for 802.1x authentication with Cisco Catalyst 3750 switches; the general idea is to only let clients connect to the company network if they can provide valid domain logon credentials, placing them in a restricted VLAN instead if they can't. NAP would also be a bonus, but it can be evaluated later; the main point now is only 802.1x authentication. Although I have very good knowledge of Windows and Active Directory (on the Microsoft side) and quite good knowledge of Catalyst switches (on the Cisco side), I'm totally new to 802.1x; I'd really like some general guidelines and help here, and some sort of implementation guide would also be very useful.
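
    A hedged IOS sketch of the switch side, just to show the moving parts (the RADIUS address, key, and VLAN are placeholders, and exact commands vary by IOS version - older 3750 images use dot1x port-control and dot1x auth-fail vlan instead of the authentication forms):

        aaa new-model
        aaa authentication dot1x default group radius
        radius-server host 10.0.0.10 key SharedSecret
        dot1x system-auth-control
        !
        interface GigabitEthernet1/0/1
         switchport mode access
         authentication port-control auto
         dot1x pae authenticator
         ! place clients that fail authentication into a restricted VLAN
         authentication event fail action authorize vlan 99

    On the NPS side, the matching pieces are a RADIUS client entry for the switch and a network policy granting access on valid domain credentials.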

    Read the article

  • Passing a file with multiple patterns to grep

    - by Michael Goldshteyn
    Let's say we have two files. match.txt is a file containing patterns to match:

        fed ghi
        tsr qpo

    data.txt is a file containing lines of text:

        abc fed ghi jkl
        mno pqr stu vwx
        zyx wvu tsr qpo

    Now, I want to issue a grep command that should return the first and third lines from data.txt:

        abc fed ghi jkl
        zyx wvu tsr qpo

    ...because each of these two lines matches one of the patterns in match.txt. I have tried:

        grep -F -f match.txt data.txt

    but that returns no results. grep info: GNU grep 2.6.3 (cygwin). OS info: Windows 2008 R2. Update: It seems that grep is confused by the space in the search pattern lines, but with the -F flag it should be treating each line in match.txt as an individual match pattern.
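
    A hedged diagnostic worth trying (not confirmed in the post): on cygwin, a match.txt saved with Windows CRLF line endings makes every pattern end in an invisible carriage return, which -F then treats as part of the literal string, so nothing matches. Checking and stripping it:

        # show any trailing \r characters in the pattern file
        od -c match.txt | head

        # strip carriage returns on the fly and retry
        grep -F -f <(tr -d '\r' < match.txt) data.txt

    If the second command prints the two expected lines, converting match.txt once with dos2unix fixes it permanently.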

    Read the article

  • How to configure a VM on the same machine for remote desktop [closed]

    - by Varun K
    I want to achieve the following (note: I'd like to get this done first with Windows 7 as both the host and the VM OS):

    1. Install a Windows 7/XP/Windows 8 VM on a Windows 7/Windows 8 host machine.
    2. Configure it so that I can connect to it via Remote Desktop.

    This is because I use screen reader software, and audio output directly from VMs is not very responsive. My software has a feature where it can connect to a copy of itself on the remote machine (during an RDP session) and then receive the text descriptions, which it translates into audio on the client (the host, in this case) machine. I want to know (see the sketch below):

    1. Which VM software can let me do this - VMware, MS Virtual PC, or VirtualBox?
    2. If it is possible with every VM software, could you give an example of how to do this with any one of the three?

    Specifically, I know how to install Windows in a VM (on both VMware and Virtual PC), but don't really know how to configure the network so that I can remote into that VM from the host OS. Hope this clarifies what I'm trying to achieve.
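
    A hedged sketch for VirtualBox, one of the three options asked about (the VM name and host port are placeholders): enable Remote Desktop inside the guest as usual, then add a NAT port-forwarding rule so the host can reach the guest's RDP port through localhost.

        VBoxManage modifyvm "Win7Guest" --natpf1 "rdp,tcp,127.0.0.1,53389,,3389"

    After that, mstsc /v:127.0.0.1:53389 on the host should land on the VM's desktop. With a bridged adapter instead of NAT, the guest gets its own LAN address and no forwarding rule is needed.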

    Read the article

  • Enabling support of EUS and Fusion Apps in OUD

    - by Sylvain Duloutre
    Since the 11gR2 release, OUD supports Enterprise User Security (EUS) for database authentication and also Fusion Apps. I plan to blog about that soon. Meanwhile, the 11gR2 OUD graphical setup does not let you configure both EUS and Fusion Apps support at the same time. However, it can be done manually using the dsconfig command line. The simplest way to proceed is to select EUS from the setup tool, then manually add support for Fusion Apps with the dsconfig commands below.

    Create an FA workflow element with eusWfe as its next element:

        dsconfig create-workflow-element \
            --set enabled:true \
            --set next-workflow-element:Eus0 \
            --type fa \
            --element-name faWfe

    Modify the workflow so that it starts from your FA workflow element instead of Eus:

        dsconfig set-workflow-prop \
            --workflow-name userRoot0 \
            --set workflow-element:faWfe

    Note: the configuration changes may differ slightly if multiple databases/suffixes are configured on OUD.

    Read the article

  • How to recover a badly encrypted directory

    - by Fato Alessandro
    I had a problem while reinstalling Ubuntu. I tried to reinstall without formatting the home partition and with the same username. The home directory of the new installation was set to be encrypted. Then the installation went wrong because of the CD, so it never really started (it stopped at the copying stage). However, Ubuntu did encrypt the home directory, but the procedure probably went wrong. By now I have installed Ubuntu on another partition and tried to mount with encrypted-recovery, but the directory mounted in tmp wasn't the directory I had before; there were just strange directories with encoded names. The strange fact is that the file system is not damaged: it still knows how much data is actually stored in it. If I look with GParted or even Nautilus I see 45 GB of data present on the partition. This makes me think that my data is not erased but maybe hidden. Moreover, when I tried to mount the encrypted home directory with encrypted-recovery-personal it asked me for the encryption secret. I entered nothing, just pressed Enter, and the password was accepted. Is there a method for recovering my data? Maybe trying to re-encrypt the directory? How could I get back to the previous documents? Thanks to everyone.
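
    A hedged sketch of the standard recovery path, assuming the home was encrypted with eCryptfs (Ubuntu's default at the time); the username in the path is a placeholder. Strange directories with encoded names are exactly what an unmounted eCryptfs directory looks like, which is consistent with the data still being there.

        sudo apt-get install ecryptfs-utils
        # scan for encrypted private directories and mount interactively
        sudo ecryptfs-recover-private
        # or point it at the wrapped data explicitly
        sudo ecryptfs-recover-private /home/.ecryptfs/username/.Private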

    Read the article

  • xDebug on Zend Server CE under Windows XP

    - by Hippyjim
    I have Zend Server installed on my Windows XP development machine, installed when I was naive and didn't know that Eclipse was going to suck so badly for PHP development. I've made the upgrade to NetBeans, but for debugging it only supports Xdebug. To be fair, I've never used "proper" debuggers before, but other folks have raved about them, so I thought I'd give it a try. I followed some directions on the Zend forum about how to install Xdebug on Zend Server, disabling Zend Debugger in the process. The Xdebug "custom installation instructions" wizard tells me that my PHP was compiled with an unsupported compiler (MS VC8) and won't let me download anything. I tried a couple of the other Xdebug binaries, but they just refused to load. So I'm left without a debugger option. Does anyone know how I can change the compiler of the PHP version I have installed so I can use a debugger in NetBeans, or how else I can get Xdebug to install on Zend Server?
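
    Assuming a DLL built for the right compiler can be found, loading Xdebug on Windows comes down to a few php.ini lines; the path below is a placeholder, and these are the Xdebug 2-era remote-debugging settings NetBeans expects by default (port 9000).

        [xdebug]
        zend_extension="C:\php\ext\php_xdebug.dll"
        xdebug.remote_enable=1
        xdebug.remote_host=localhost
        xdebug.remote_port=9000

    The compiler warning matters because an extension built against a different Visual C runtime than PHP itself generally refuses to load, which matches the symptom described; the usual fix is finding a build for the matching compiler rather than recompiling PHP.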

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience or knowledge of the field they are recruiting for. A warning to those companies hiring technical recruiters: ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies but still get interest from the recruiter for an interview.

    Letter to me:

    Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerly, Mindy Recruiter

    My response:

    Mindy - Wow, I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993, before it was owned by Adobe, when it was operating under the code name "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boatload of ColdFusion EXP, I also have a ton of experience in the open source community's lesser-known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George

    And her response:

    Hi George - just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy

    [Sigh]

    Read the article

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences. Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through fielding the consistent stream of inquiries about the library's status, and requests for its rapid return to public availability.

    Restoring the library as a public resource is only the first step! For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery. This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings. We'd like guidance from the community on the kinds of patterns that are most valuable - so make sure to let us know. And we're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms. If you'd like to contribute, please get in touch.

    Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns

    And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive, and restore it as a public resource.

    Read the article

  • Small-scale database options for .NET

    - by raney
    I have a .NET 4.0/WPF-based application I've developed and maintain for my company that acts as a friendly GUI central point of information, combining information pulled from a couple of SQL databases as well as CSV exports from a few other applications. I would like to build out my own database to support the entirety of the information that the application accesses, so that I could have a service running on my server that would read in the necessary remote SQL info and file exports. This would give the user's application a single database to connect to and remove all of the file handling currently involved in the program (copying new CSV resources from a network location, reading them into memory on each launch). I have complete control and flexibility here as long as the user's experience isn't affected, and this is as much a learning experience as it is tidying up. The caveat being, I don't have much in the way of a budget. Right now I recognize my options to be:

    - SQL Express: I'm comfortable with the server setup, and I like ADO.NET and LINQ to SQL. I feel that I have the least to learn here, and it would let me focus on SQL in a familiar environment. Perhaps in conjunction with Entity Framework?
    - MongoDB: I don't know a whole lot about it, but I've heard the name enough to make me curious. Brief research seems friendly enough, and there is .NET support. I like working with open source projects.

    My questions are:

    - What's popular and extensible right now? I'm not far from starting to job-hunt, and I'd like this project to be relevant going forward.
    - What am I missing? Pros, cons? Other options? What plays well with .NET?
    - What are the things I should be considering, and the questions I should be asking, when making a decision like this?

    Thanks for your time.

    Read the article

  • YouTube video streaming slow

    - by Newbie
    When I try to stream YouTube videos on my Ubuntu 11.04, they don't stream smoothly: they keep buffering and are choppy. Here's my config:

    - Laptop: Gateway NV58
    - RAM: 4 GB
    - Ethernet controller: Broadcom Corp NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)

    Let me know if you need more details. Output of lspci:

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
        00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
        00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
        00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
        00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
        00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
        00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
        00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
        00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
        00:1d.3 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
        00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
        00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
        00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
        00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
        02:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
        04:00.0 Network controller: Intel Corporation WiFi Link 5100

    Read the article

  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post, I am trying to tune my system to run better with a solid-state drive. But regarding RAM disks and /etc/fstab usage, I have some gaps in understanding. So let's say I add the following lines to /etc/fstab:

        tmpfs /tmp       tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/tmp   tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/log   tmpfs defaults,noatime,nodiratime,mode=0755 0 0

    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical data that was mounted in those places before? Is it gone? Will it be back when I edit /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:

        none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0

    What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk. So should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification.

    UPDATE: I downloaded an image of 600 MB in size into /tmp, which is mounted with the tmpfs settings above. Now I wanted to compare the RAM usage before and after the download. I expected the RAM usage to increase by 600 MB after the download, but the System Monitoring tool showed me no changes at all. How can this be? Does tmpfs work differently than I expect it to?
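
    A hedged demonstration of the underlying behaviour, safe to try on a throwaway directory: mounting a tmpfs over a directory hides the files underneath rather than erasing them, and they reappear on unmount - which is also why the disk space stays allocated while the tmpfs is active.

        mkdir -p ~/demo && echo hello > ~/demo/file
        sudo mount -t tmpfs tmpfs ~/demo
        ls ~/demo          # empty: the original file is hidden, not gone
        sudo umount ~/demo
        cat ~/demo/file    # "hello" is back

    As for the 600 MB test: tmpfs pages are accounted as shared memory/page cache rather than as any process's memory, so many monitors won't show them as "used"; df -h /tmp makes the consumption visible.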

    Read the article

  • Unable to record sound from web browser (Firefox / Chromium) using recordmydesktop

    - by thamurath
    I have to do some screencast tutorials and I am using recordMyDesktop with the GTK front end to do it. I need to record the sound as well, and here is where I have found a problem. It took me some time, but now I can record the sound from almost every application on my desktop... almost. I need to capture some sound from a web application using Java, but when I load the page nothing appears in the Playback tab of pavucontrol. I think this is the problem: if there is no sound stream, recordMyDesktop seems to decide there is no sound to record. The funny thing is that I can hear the sound in my speakers! I have tried with Firefox and Chromium with no success, although I have been able to record YouTube videos without problems, so it seems that Java is the key here. Any suggestions or ideas? P.S.: I am using Ubuntu 11.10 with this configuration (if more information is needed, please let me know; sigh, I cannot post images, so here's a description): I have an Audigy 2 sound card using the Analog Stereo Output profile. I also have an "Internal Audio" device, but its profile is set to "Off". In recordMyDesktop > Advanced > Sound: Device = default.
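
    A hedged thing to try (device names vary by system): record from PulseAudio's monitor source, which captures whatever actually reaches the speakers regardless of how the Java plugin opens its audio stream.

        # find the monitor of the output device
        pactl list short sources | grep monitor

        # point recordMyDesktop at PulseAudio instead of the raw ALSA device
        recordmydesktop --device pulse -o tutorial.ogv

    While recording, pavucontrol's Recording tab should show the capture stream; switching it to the "Monitor of ..." source routes the speaker signal into the recording.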

    Read the article

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though usually version 1.2.3 is a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB/file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with a GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:

    - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
    - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often, so that you notice any problems early when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
    - Something in between. Is there a sweet spot?

    Read the article

  • Preferred lambda syntax?

    - by Roger Alsing
    I'm playing around a bit with my own C-like DSL grammar and would like some opinions. I've reserved the use of "(...)" for invocations, e.g.:

        foo(1,2);

    My grammar supports "trailing closures", pretty much like Ruby's blocks, that can be passed as the last argument of an invocation. Currently my grammar supports trailing closures like this:

        foo(1,2) {
            //parameterless closure passed as the last argument to foo
        }

    or

        foo(1,2) [x] {
            //closure with one argument (x) passed as the last argument to foo
            print (x);
        }

    The reason why I use [args] instead of (args) is that (args) is ambiguous:

        foo(1,2) (x) {
        }

    There is no way in this case to tell if foo expects 3 arguments (int, int, closure(x)) or if foo expects 2 arguments and returns a closure with one argument: (int,int) -> closure(x). So that's pretty much the reason why I use [] for now. I could change this to something like:

        foo(1,2) : (x) {
        }

    or

        foo(1,2) (x) -> {
        }

    So the actual question is: what do you think looks best? [...] is somewhat wrist-unfriendly:

        let x = [a,b] {
        }

    Ideas?

    Read the article

  • Anyone tried dd'ing RAID members?

    - by DusteD
    I want to replace all disks in a 10-disk RAID 6 (Linux software RAID). I could do this by pulling a disk, letting the array rebuild, rinse, repeat. But this would take a very long time and cause 10 rebuilds, which would most likely stress all 10 disks much more than simply reading each disk through once. My question is thus: could I just shut down the array, dd each old disk to a new disk, and then start the array with the 10 new disks? In an ideal world I would build another server and just copy the data via the network, but this is not an ideal world.
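
    A hedged sketch of what that would look like (device names are placeholders; the new disks must be at least as large as the old ones, and the array must stay stopped for the whole copy so the members are consistent):

        # stop the array so no member changes under the copy
        sudo mdadm --stop /dev/md0

        # clone each old member to its replacement
        sudo dd if=/dev/sdb of=/dev/sdm bs=4M
        # ... repeat for the remaining nine members ...

        # reassemble from the new disks
        sudo mdadm --assemble --scan

    Since the md superblock travels with the copy, the kernel should recognize each clone as the original member; verifying one clone with mdadm --examine before assembling is cheap insurance.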

    Read the article

  • Moving from Analogue PBX to digital VoIP?

    - by saint
    I don't even know if this belongs here; if not, do let me know. We have an analogue Alcatel PABX system in our little office. We have extensions, direct lines, and PBX lines. We are trying to move to a more digital/flexible way of handling the phones, and I've heard good things about FreeSWITCH. I have zero knowledge about it. My biggest question is how one would handle existing phone lines with such a system. Surely there must be a way to make and receive calls from outside. Just a nudge in the right direction would be fine. Thanks.

    Read the article

  • SQL SERVER – Tell me What You Want to Listen – My 2 TechED 2011 Sessions

    - by pinaldave
    I am going to present two sessions at TechEd India on March 25, 2011, and I would like to know what you want me to cover in them. Watch the video taken by my wife while I was preparing for the sessions.

    Sessions on March 25, 2011:

    - Understanding SQL Server Behavioral Pattern – SQL Server Extended Events: 12:00 PM to 01:00 PM
    - SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting: 04:15 PM to 05:15 PM

    I promise the following for both of my sessions:

    - I will share the scripts demonstrated in the session right at the end of the session.
    - The sessions will be 300-400 level, but I promise to keep the concepts very simple.
    - Fewer slides and lots of meaningful demos.
    - Sessions close to real-life cases and scenarios.
    - Surprise gifts for the best participants.
    - I promise to answer all questions, either during the session or right outside the hall afterwards.
    - Lots of technical education and FUN!

    Please leave a comment with your expectations, and if you are going to attend the sessions, do let me know here. We will surely meet at the event and have some interesting talk. You can read the abstracts of the sessions over here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implemented with requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work, though, with HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently, i.e. looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant sections of the three components' config files; at least I guess they are the interesting ones. If you miss anything, please let me know.

    1) haproxy.conf:

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;

        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600; # Requests can run for a seriously long time
            }
        }

    3) uwsgi.ini:

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now when I let my clients go against nginx-server-int-01.domain.com, everything is fine. In nginx's access.log, lines like these appear:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:

        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the requests look the same; only the client IP changed, from the client's host to the one from loadbalancer.domain.com. But for whatever reason, nginx seems to assume that a static file is to be served, which eventually results in the "No such file or directory" message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
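
    A hedged reading of those logs (not from the original post): the error lines report server: localhost and a root of /usr/share/nginx/html, i.e. the stock default server block is handling the request rather than the app's server block. That is what happens when the Host header arriving from HAProxy (loadbalancer.domain.com) matches no server_name and nginx falls back to its default server. Two sketches of possible fixes, either of which should suffice:

        # haproxy.conf, in the nginx-servers backend: present the name nginx expects
        http-request set-header Host nginx-server-int-01.domain.com

        # or in nginx.conf: let the app block catch unmatched Host values too
        server {
            listen 80 default_server;
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            ...
        }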

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.

    Read the article

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the collision detection (which is currently O(N^2), a brute-force check of all pairs). I thought about multiple options: a bounding volume hierarchy, a Binary Space Partitioning tree, an octree, or a grid. I do however need some help deciding what's best for my situation. A grid seems unfeasible, simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a space ship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the Sweep and Prune algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast construction of the data structures involved and detailed enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf. If anyone has any suggestions whether or not I'm going in the right direction, please let me know.
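
    A hedged single-axis sketch of Sweep and Prune (Python purely for brevity; names are illustrative): keep the boxes sorted by interval start - cheap when the order barely changes between frames - and only emit pairs whose x-intervals overlap. A full AABB test still has to check the remaining axes on each candidate pair.

        # Minimal 1-D sweep-and-prune: each box is a (min_x, max_x) interval.
        def sweep_and_prune(boxes):
            # Sort indices by interval start; nearly-sorted input keeps this fast.
            order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
            active, pairs = [], []
            for i in order:
                lo = boxes[i][0]
                # Drop intervals that ended before this one starts.
                active = [j for j in active if boxes[j][1] >= lo]
                # Everything still active overlaps this box on x: candidate pairs.
                pairs.extend((j, i) for j in active)
                active.append(i)
            return pairs

        print(sweep_and_prune([(0, 2), (1, 3), (5, 6)]))  # -> [(0, 1)]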

    Read the article

  • 2D/Isometric map algorithm

    - by Icarus Cocksson
    First of all, I don't have much experience in game development, but I do have experience in development. I do know how to make a map, but I don't know if my solution is a normal one or a hacky one. I don't want to waste my time coding things and then realise they're utterly crap and lose my motivation. Let's imagine the following map (2D, top view, a square): X: 0 to 500, Y: 0 to 500. My character currently stands at X:250, Y:400, somewhere near the center, 100px above the bottom, and I can control him with my keyboard buttons. The LEFT button does X--, the UP button does Y--, etc. This one is kid's play. I'm asking this because I know there are some engines that automate this task. For example, games like Diablo 3 use an engine. You can pretty much drag-drop a rock onto the map, and it is automatically placed there, making the player unable to pass through it (collision detection). But what does the engine do exactly in the background? Does it generate a map like mine, place a rock at the center, and check it like this:

        unmovableObjects = array('50,50'); //we placed a rock at 50,50

        if (Map.hasUnmovableObject(CurrentPlayerX, CurrentPlayerY)) {
            //unable to move
        } else {
            //able to move
        }

    My question is: is this how 2D/isometric maps are generated, or is there a different and more complex logic behind them?

    Read the article

  • Disable Offline Files (mobsync.exe) on Windows 7 Home

    - by Synetech
    This morning I was watching the CPU graph of a Windows 7 Home laptop and noticed that every few seconds the CPU would spike several percent. I watched the processes and determined that mobsync.exe (Offline Files) was the culprit. I tried the usual steps that Googling turns up, but clicking the Manage Offline Files link to bring up the Offline Files dialog and click Disable Sync does not work, because the dialog will not display. This makes sense, since everything I have read indicates that Offline Files is not even included/supported in the Home version - so I am at a loss as to why it is running at all, let alone why it is sucking up CPU cycles. (My best guess is that it was started when they pressed Win+X to access the Mobility Center.) Of course I can just kill mobsync, but it could always come back. How/why would mobsync be running on a Home version, and how can it be disabled (the Group Policy editor, of course, is not available on a Home version)?
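
    A hedged avenue to check (not from the original post, and it is an assumption that the service is present on Home editions at all): mobsync.exe is associated with the Offline Files service, CscService, so if that service exists and keeps relaunching the process, stopping and disabling it from an elevated prompt may settle things.

        sc query CscService
        sc stop CscService
        sc config CscService start= disabled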

    Read the article
