Search Results

Search found 19625 results on 785 pages for 'local groups'.


  • Having problems building Cairo on x64 CentOS

    - by Vnuk
    I've done this many times on 32-bit CentOS and everything went off without a hitch. But now, on x64 CentOS, I can't get Cairo to find pixman. Pixman 0.18.0 is installed in /usr/local/lib (which I believe is the usual location). Configure for Cairo 1.8.10 can't find it:

        checking for cairo's image surface backend feature...
        checking for pixman... no
        checking whether cairo's image surface backend feature could be enabled... no (requires pixman-1 >= 0.12.0 http://cairographics.org/releases/)
        configure: error: mandatory image surface backend feature could not be enabled

    I've tried setting the environment variable pixman_LIBS=/usr/local/lib, but without any luck. Any idea what is going wrong? Is it possible for me to see where cairo's configure is looking for pixman? Search paths or something like that?


  • Team Foundation Server Setup/Access

    - by Angel Brighteyes
    What I need: A TFS 2010 setup that allows 2 application developers to access TFS from remote locations.

    How it is set up: Server 2008 Standard, 2 GB RAM, 300 GB HD space; SharePoint Server 2007 (using SQL Server 2005); SQL Server 2008 Standard; Team Foundation Server 2010; IIS 7. SharePoint bindings: TFS.DynAccount.Me:80; TFS:80. TFS bindings: TFS.DynAccount.Me:8080; TFS:8080. I am using the DynDNS service to account for the dynamic IP address being used; this is a requirement for the moment until I can get a better ISP package. Access is via local accounts: the server is not set up on a domain, or as a domain, so consequently I did not set up AD services.

    Problem: When logged into TFS using my credentials TFS\AdminUser through the DynDNS account TFS.DynAccount.Me, I receive the 'Red X of Death' on the Documents and Reports folder. When logged into TFS through the local peer-to-peer network using the same credentials TFS\AdminUser, I do not receive the 'Red X of Death' problem.

    Further troubleshooting: When users right-click 'TeamProject1' and click 'Show Project Portal', it tries to take them to http://TFS:8080 instead of http://TFS.DynAccount.Me:8080. From further research I am assuming this is because Team Foundation Server was set up with a local name of TFS instead of 'TFS.DynAccount.Me', as described in Visual Studio Magazine's "The Red X of Death". Users can access the Team Portal for SharePoint via http://TFS.DynAccount.Me/TeamCollection/TeamProject, so it is not like we are dead in the water or anything. However, as most employees/staff are prone to do, they have expressed a great distaste for having to do it this way and be patient until the current project is finished, since we are under a very strict deadline.

    Is there a way to set this up differently, change some settings someplace, reinstall it, point a CNAME record for our domain website to the DynDNS account (e.g. TFS.OurDomain.com points to TFS.DynAccount.Me, which consequently does allow access to the http site without issues), or something? After all the time and effort I have spent on this - first the cost, second the bloody install, third learning SharePoint well enough, fourth the hours into days spent on this, fifth more troubleshooting, sixth employee headaches - I really don't feel like just letting it lie where it is. I figure in my spare/off time I will keep trying to get this to work, so I really appreciate any help anyone can give me. I know this is probably something really stupidly simple that I will facepalm over, but at the moment the stress and frustration just have me beat. Thank you again, this community has always been a great help.


  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I'm a developer. I like to write new, exciting code every day; my perfect day at work is a day when I wake up knowing I have to write some code I haven't written before, or use a new framework/language/platform that is unknown to me. The best days in the office are when a project is waiting for me to architect or write.

    In my 15 years in the development field, in order to get a better salary I had to manage people - not just lead developers, but actually manage people. Something I found out when I got into a management position is that I'm not that good at managing people, and I'm not afraid to say it. I do not enjoy that part of the job, and worse, it takes time away from what I really like. Leading developers and managing people are very different things; I do like teaching and leading developers on a project. Yet most people believe, and it is true in most companies, that the way to get a better salary is to be promoted to a manager position. In order to advance in your career you need to let go of the everyday writing of code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career and become a manager, for that manager salary. That path is the common one in most companies; the most intelligent companies out there have learned that promoting a good developer means getting a crappy manager and losing a good resource.

    Now scratch everything I said, because as I previously stated, I don't see myself going to the office every day and just managing people until it is time to go home. I like to spend hours working on some code to accomplish a task, learning new platforms and languages or patterns for existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems is not something I could enjoy.

    Then, riding my motorcycle to work one cold morning over the Redlands Canyon and listening to the .NET Rocks podcast, I heard Michael "Doc" Norton explaining how to take control of your development career without necessarily going down the manager's track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell confirmed everything I have ever done, in more detail, and assured me that there are other paths. His method is simple, and most of us already do many of those steps. Mr. Michael "Doc" Norton believes it pays off in the long run, with companies preferring to pay higher salaries to those developers. I would actually think that many companies do not see developers that way, though this is not true of bigger companies. However, I do believe the value of those developers increases, and most of the time changing companies can increase their salary rather than staying at the same one.

    In short, without even trying to step into Mr. Norton's shadow, and without following the steps in order: you should love to learn new technologies, and then teach them to other geeks. I personally have learned many technologies and I haven't stopped doing that; I am a professor at UCR, where I teach ASP.NET and Silverlight. Mr. Norton continues that, after that, you want to be involved in the development community: user groups, online forums, open source projects. I personally talk at user groups, and I'm very active in forums, asking and answering questions; for those I was awarded the Microsoft MVP for ASP.NET. After you accomplish all of those, you should also expose yourself for what you know and what you do not know; learning a new language will make you humble again, as well as extremely happy. There is no better feeling than learning a new language or pattern in your daily job.

    If you love your job every day and what you do, I really recommend you follow Michael's presentation, which he kindly shared at the link below. His confirmation is refreshing: my future is not behind a desk where the computer screen is on my right-hand side instead of in front of me, where I have to spend the days filling out performance forms for people while the new platforms I haven't been using yet are just at my fingertips.

    Presentation here: http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3

    The outline of the deck, "Take Control of Your Development Career" by Michael "Doc" Norton (@DocOnDev, http://docondev.blogspot.com/, [email protected]):

    - Recovering Post Technical: I love to learn; I love to teach; I love to work in teams; I love to write code; I really love to write code. What about YOU? Do you love your job? Do you love your employer? Do you love your boss? What do you love? What do you really love?
    - Take Control: Get Noticed, Get Together, Get Your Mojo, Get Naked, Get Schooled
    - Get Noticed: Know Your Business; Understand Management; Do Your Existing Job; Make Yourself Expendable
    - Get Together: Join a User Group; Help Run a User Group; Start a User Group
    - Get Your Mojo: Kata; Koans; Breakable Toys; Open Source
    - Get Naked: Run with Group A; Do Something Different; Own Your Mistakes; Admit You Don't Know
    - Get Schooled: Choose a Mentor; Attend Conferences; Teach a New Subject
    - Get Started: Read These (Again)

    In short summary, I recommend any developer to check his blog and, more importantly, his presentation. I haven't been lucky enough to watch him live; I'm looking forward to the day I have the opportunity. He is giving us hope for the future of developers. When I see some of my geek friends moving into positions that within a few short years they begin to regret, I get more unsure of my future doing what I love. I would say the answer now is to look at the spectrum of companies that understand and appreciate developers. There are a few out there; hopefully, with time, code sweatshops will start disappearing, and being a developer will feed a family of four.

    Cheers,
    Al


  • 10013 error (AccessDenied) on Silverlight Socket application

    - by Samvel Siradeghyan
    I am writing a Silverlight 3 application which works over the network, like a client-server application: a WinForms application for the server and a Silverlight application for the client. I use a TcpListener on the server and connect to it from the client with a Socket. On the local network it works fine, but when I try to use it over the internet it doesn't connect to the server. I use the IP address on the local network, and the real IP with port number for the internet version. I get error 10013 AccessDenied. The port number is correct and the access policy exists. The firewall is turned off. Where is the problem? Thanks.


  • copy or move a file from one ftp server to another

    - by Oleg Pavliv
    I have a Java application which copies or moves a bunch of multi-gigabyte files from one FTP server to another. Currently it copies a file from the first FTP server to the local computer (where it runs) using FTP get, and then copies it to the second FTP server using FTP put. I use the net library from Apache. Is it possible to copy it directly from one FTP server to another, bypassing the local computer? One idea is to create a Java telnet session and send a couple of FTP commands. Will it work? Any other suggestions?
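
    For what it's worth, Apache Commons Net does expose a server-to-server (FXP) API, so a raw telnet session shouldn't be necessary. Below is a minimal, untested sketch of that approach; the host names, credentials, and paths are placeholders, and it only works if both servers permit FXP-style third-party data connections:

        import java.io.IOException;
        import java.net.InetAddress;
        import org.apache.commons.net.ftp.FTP;
        import org.apache.commons.net.ftp.FTPClient;

        public class FxpCopy {
            public static void main(String[] args) throws IOException {
                FTPClient source = new FTPClient();
                FTPClient target = new FTPClient();
                source.connect("ftp1.example.com");
                source.login("user1", "pass1");
                target.connect("ftp2.example.com");
                target.login("user2", "pass2");
                source.setFileType(FTP.BINARY_FILE_TYPE);
                target.setFileType(FTP.BINARY_FILE_TYPE);

                // Ask the source server for a passive data port, then tell the
                // target server to connect straight to it (the FXP handshake).
                if (!source.enterRemotePassiveMode()) {
                    throw new IOException("source refused PASV: " + source.getReplyString());
                }
                target.enterRemoteActiveMode(
                        InetAddress.getByName(source.getPassiveHost()),
                        source.getPassivePort());

                // Kick off the transfer on both control connections; the data
                // flows server-to-server and never touches this machine.
                if (source.remoteRetrieve("/pub/huge-file.bin")
                        && target.remoteStore("/incoming/huge-file.bin")) {
                    source.completePendingCommand();
                    target.completePendingCommand();
                }
                source.logout();
                target.logout();
                source.disconnect();
                target.disconnect();
            }
        }

    Many servers disable FXP (a PORT command pointing at a third-party address) for security reasons, so this needs to be verified against the actual servers first.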


  • Problem with TcpListener on Silverlight application

    - by Samvel Siradeghyan
    I am writing a Silverlight 3 application which works over the network, like a client-server application: a WinForms application for the server and a Silverlight application for the client. I use a TcpListener on the server and connect to it from the client with a Socket. On the local network it works fine, but when I try to use it over the internet it doesn't connect to the server. I use the IP address on the local network, and the real IP with port number for the internet version. Where is the problem? Thanks.


  • rmagick error on heroku

    - by nvano
    I'm cropping images with Paperclip. I have a custom module which works great on my local machine (copied from Railscast 182):

        # file: lib/paperclip_processors/cropper.rb
        module Paperclip
          class Cropper < Thumbnail
            def transformation_command
              if crop_command
                crop_command + super.sub(/ -crop \S+/, '')
              else
                super
              end
            end

            def crop_command
              target = @attachment.instance
              if target.cropping?
                " -crop '#{target.crop_w.to_i}x#{target.crop_h.to_i}+#{target.crop_x.to_i}+#{target.crop_y.to_i}'"
              end
            end
          end
        end

    On Heroku I get the following error:

        NoMethodError (private method `sub' called for ["-resize", "220x", "-crop", "220x220+0+18", "+repage"]:Array):
          lib/paperclip_processors/cropper.rb:12:in `transformation_command'
          paperclip (2.3.3) lib/paperclip/thumbnail.rb:55:in `make'
          paperclip (2.3.3) lib/paperclip/processor.rb:33:in `make'
          paperclip (2.3.3) lib/paperclip/attachment.rb:295:in `post_process_styles'
          paperclip (2.3.3) lib/paperclip/attachment.rb:294:in `each'
          paperclip (2.3.3) lib/paperclip/attachment.rb:294:in `inject'
          paperclip (2.3.3) lib/paperclip/attachment.rb:294:in `post_process_styles'
          paperclip (2.3.3) lib/paperclip/attachment.rb:291:in `each'
          paperclip (2.3.3) lib/paperclip/attachment.rb:291:in `post_process_styles'
          paperclip (2.3.3) lib/paperclip/attachment.rb:285:in `post_process'
          paperclip (2.3.3) lib/paperclip/callback_compatability.rb:23:in `call'
          paperclip (2.3.3) lib/paperclip/callback_compatability.rb:23:in `run_paperclip_callbacks'
          paperclip (2.3.3) lib/paperclip/attachment.rb:284:in `post_process'
          paperclip (2.3.3) lib/paperclip/callback_compatability.rb:23:in `call'
          paperclip (2.3.3) lib/paperclip/callback_compatability.rb:23:in `run_paperclip_callbacks'
          paperclip (2.3.3) lib/paperclip/attachment.rb:283:in `post_process'
          paperclip (2.3.3) lib/paperclip/attachment.rb:214:in `reprocess!'
          app/models/user.rb:339:in `reprocess_avatar'
          app/controllers/user_controller.rb:57:in `update_avatar'
          haml (2.2.3) rails/./lib/sass/plugin/rails.rb:19:in `process'
          /home/heroku_rack/lib/static_assets.rb:9:in `call'
          /home/heroku_rack/lib/last_access.rb:25:in `call'
          /home/heroku_rack/lib/date_header.rb:14:in `call'
          thin (1.0.1) lib/thin/connection.rb:80:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:78:in `catch'
          thin (1.0.1) lib/thin/connection.rb:78:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:57:in `process'
          thin (1.0.1) lib/thin/connection.rb:42:in `receive_data'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run'
          thin (1.0.1) lib/thin/backends/base.rb:57:in `start'
          thin (1.0.1) lib/thin/server.rb:150:in `start'
          thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start'
          thin (1.0.1) lib/thin/runner.rb:173:in `send'
          thin (1.0.1) lib/thin/runner.rb:173:in `run_command'
          thin (1.0.1) lib/thin/runner.rb:139:in `run!'
          thin (1.0.1) bin/thin:6
          /usr/local/bin/thin:20:in `load'
          /usr/local/bin/thin:20


  • UIWebView NSURLRequestReloadIgnoringLocalCacheData doesn't actually ignore the cache

    - by dodeskjeggen
    I have a UIWebView object, with the caching-policy specified as: NSURLRequestReloadIgnoringLocalCacheData This should ignore whatever objects are in the local cache and retrieve the latest version of a site from the web. However, after the first load of the site (10 resources in trace, HTTP GET), all subsequent loads of the site only retrieve a small subset of resources (3 resources in trace, HTTP GET). The images all appear to be loaded from some local source. I have confirmed that my sharedURLCache has a memory usage of 0 bytes, and a disk usage of 0 bytes. Whenever the process starts fresh, the full version of the site is retrieved again. This leads me to believe that these resources are being stored in an in-memory cache, but as I noted before, [[NSURLCache sharedURLCache] currentMemoryUsage] returns 0. I have also tried explicitly removing the cached response for my request, but this seems to have no effect. What gives?


  • rspec, autotest and Rails 3 beta 2 can't find executable issue

    - by Toby Hede
    I am running Rails 3 Beta 2 and attempting to get autotest working with RSpec. When I run autospec, I receive the following message:

        /usr/local/lib/ruby/site_ruby/1.8/rubygems.rb:334:in `bin_path': can't find executable autospec for rspec-2.0.0.beta.5 (Gem::Exception)
            from /usr/local/bin/autospec:19

    I am using Ruby 1.9.1 with the following gems: rails (3.0.0.beta2), railties (3.0.0.beta2), rspec (2.0.0.beta.5), rspec-core (2.0.0.beta.5), rspec-expectations (2.0.0.beta.5), rspec-mocks (2.0.0.beta.5), rspec-rails (2.0.0.beta.5), ZenTest (4.3.1). Any help would be greatly appreciated.


  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. Today's blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I'll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today's post, though, I thought I'd start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you'll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called "Breakpoint Labels" - it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain "logical groups" of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 "breakpoint labeling" feature you can now name these "groups" of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    The breakpoints window within Visual Studio 2010 lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base). The first and last breakpoint in the list break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new "Edit labels..." menu command to give them a common label/name (making them easier to find and manage). In the dialog that appears, we can create a new string label for our breakpoints or select an existing one we have already defined. In this case we'll create a new label called "Lifetime Management" to describe what these two breakpoints cover. When we press the OK button, our two selected breakpoints will be grouped under the newly created "Lifetime Management" label.

    Filtering/Sorting Breakpoints by Label

    We can use the "Search" combobox to quickly filter/sort breakpoints by label - for example, showing only those breakpoints with the "Lifetime Management" label.

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files - which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the "Export breakpoint" button in the Breakpoints window. For example, I can filter my breakpoint list to export only two particular breakpoints (specific to a bug that I'm chasing down), export them to an XML file, and then attach the file to a bug report or email - which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new "DataTip pinning" features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) - and then click the new "pin" button on it to make the DataTip always visible. You can "pin" any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. For example, I can "pin" three variables: "category", "Request.RawUrl" and "Request.LogonUserIdentity.Name" - where the last two variables are sub-properties of the "Request" object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger. Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it. This makes it really easy to attach and track debugging notes.

    Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session, any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again, any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you stopped the debugger only to go back to your code and say: $#@! - what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their "last value from debug session" - and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn't running. On the left-hand margin of the code editor, though, you'll find a push-pin for each pinned DataTip that you've previously set up. Hovering your mouse over a pinned DataTip will cause it to display on the screen - hovering over the first pin in the editor, for example, displays our debug session's last values for the "Request" object DataTip, along with the comment we associated with them. This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don't need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files - which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.

    Summary

    Visual Studio 2010 includes a bunch of great new debugger features - both big and small. Today's post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I'll be covering some of the "big big" new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application, and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts.

    Use Case

    Essentially, when a user logs in, we want to detect if the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, where the user may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and deauthorize any devices if they so choose. We require a solution that is very light touch (meaning, requiring no client-side software installation), and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, using local storage, degrading to flash cookies, and then normal cookies).

    At First Blush

    Track two separate values (local data or cookies): a hash representing a secure sign-on token, as well as a device token. Both values are driven (and recorded) by the web application, and dictated to the client. The SSO token is dependent on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce.

    Problem

    With this solution, it's possible for someone to just copy the SSO and device tokens and use them in another request. While the sequence number will help me detect such an abuse and thus deauthorize the device, the detection and response can only happen after the valid device and the malicious request both attempt access, which is ample time for damage to be done. I feel like using HMAC would be better. Track the device, the sequence, create a nonce, timestamp, and hash with a private key, then send the hash plus those values as plain text. The server does the same (in addition to validating the device and sequence) and compares. That seems much easier, and much more reliable... assuming we can securely negotiate, exchange, and store private keys.

    Question

    So then, how can I securely negotiate a private key for an authorized device, and then securely store that key? Is it more possible, at least, if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or, is there something I can do to my original draft to mitigate the vulnerability I describe?
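
    To make the HMAC idea above concrete, here is a minimal sketch of the computation being described (the field names, the delimiter, and the choice of HmacSHA256 are illustrative assumptions, not a vetted protocol); the client sends the plain-text fields plus the MAC, and the server recomputes and compares:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class DeviceAuth {
            // MAC over the fields the question lists: device token, sequence,
            // nonce, and timestamp. The key is the per-device secret.
            static byte[] sign(byte[] key, String deviceToken, long sequence,
                               String nonce, long timestamp) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(key, "HmacSHA256"));
                String payload = deviceToken + "|" + sequence + "|" + nonce + "|" + timestamp;
                return mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            }

            // Server-side check: recompute and compare in constant time.
            static boolean verify(byte[] key, String deviceToken, long sequence,
                                  String nonce, long timestamp, byte[] received) throws Exception {
                byte[] expected = sign(key, deviceToken, sequence, nonce, timestamp);
                return MessageDigest.isEqual(expected, received);
            }
        }

    Note that this doesn't answer the hard part of the question - negotiating and storing the key on a client with no installable software - it only pins down what each side would compute once a key exists.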


  • Java Classpath Problems in Ubuntu

    - by Travis
    First off, I'm running Ubuntu 9.10. I've edited the /etc/environment file to look like this:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
        JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.20"
        CLASSPATH="/home/travis/freetts/lib/freetts.jar:/home/travis/freetts/lib/jsapi.jar:."

    I then run "source /etc/environment" to make sure the changes are included. Then I try compiling my simple test program using this:

        javac Test.java

    It throws out a few errors, but when I compile like this, it works just fine:

        javac -cp /home/travis/freetts/lib/freetts.jar:/home/travis/freetts/lib/jsapi.jar:. Test.java

    This leads me to believe that for some reason javac isn't seeing the CLASSPATH environment variable. I can echo it and everything in the terminal: echo $CLASSPATH gives me back what I put in. Any help on this would be greatly appreciated.
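
    One plausible cause (an assumption worth verifying, not a confirmed diagnosis): sourcing /etc/environment creates shell variables but does not export them, so a child process like javac never inherits CLASSPATH, even though echo, being a shell builtin, still shows it. A tiny Java program can confirm what a child JVM actually sees:

        public class EnvCheck {
            public static void main(String[] args) {
                // If this prints null while `echo $CLASSPATH` shows a value,
                // the variable is set in the shell but not exported to children.
                System.out.println("CLASSPATH = " + System.getenv("CLASSPATH"));
            }
        }

    If that turns out to be the case, running `export CLASSPATH` in the shell (or logging out and back in, so that pam_env applies /etc/environment properly) would be the thing to try.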


  • Wordpress Autologin Plugin not working on server

    - by Nishant
    Hello everyone. I have been busy integrating WordPress into a CakePHP application. Last Monday I cracked the way to integrate it. The next problem I faced was that the client wanted to auto-login the users who are logged in on the CakePHP side. I did that too, and it works fine locally. I am using the session variable of CakePHP (which is set in CakePHP's core.php) in WordPress as well. The code snippet of the auto-login plugin is:

        session_name("Cake_PHP_Session_Vars");
        session_start();

        function auto_login() {
            if (!is_user_logged_in()) {
                // determine WordPress user account to impersonate
                $user_login = 'guest';
                // get user's ID
                $sessVars = $_SESSION['User'];
                $user_id = $sessVars['id'];
                // login
                wp_set_current_user($user_id, $user_login);
                wp_set_auth_cookie($user_id);
                do_action('wp_login', $user_login);
            }
        }
        add_action('init', 'auto_login');

    It all works fine on the local system, but when I put it on the server, it does not work. Please suggest what could be the problem here. Thanks in advance.


  • Programmatically retrieve disconnected network adapter information in .NET

    - by Soo Wei Tan
    I have an application written in C# that needs to retrieve information like the IP address and subnet mask from a disconnected network adapter. I've tried using various methods such as WMI and the .NET NetworkAdapter class, but they don't return any useful data when the network adapter is disconnected. I'm pretty sure Windows keeps this information somewhere, since I can apply network settings using netsh and they appear correctly in the Control Panel. One thing that worked for me in XP was to parse the output of the netsh tool, which would return information even for a disconnected adapter. However, this doesn't seem to work on Windows 7.

    Win XP output:

        Configuration for interface "Local Area Connection 5"
        DHCP enabled:        No
        IP Address:          169.254.0.128
        SubnetMask:          255.255.255.0
        InterfaceMetric:     0

    Win7 output:

        Configuration for interface "Local Area Connection 2"
        DHCP enabled:        No
        InterfaceMetric:     5

    Any ideas?


  • Nginx A/B testing

    - by Alex
    Hey, I'm trying to do A/B testing, and I'm using Nginx for this purpose. My Nginx config file looks like this:

        events {
            worker_connections 1024;
        }

        error_log /usr/local/experiments/apps/reddit_test/error.log notice;

        http {
            rewrite_log on;

            server {
                listen 8081;
                access_log /usr/local/experiments/apps/reddit_test/access.log combined;

                location / {
                    if ($remote_addr ~ "[02468]$") {
                        rewrite ^(.+)$ /experiment$1 last;
                    }
                    rewrite ^(.+)$ /main$1 last;
                }

                location /main {
                    internal;
                    proxy_pass http://www.reddit.com/r/lisp;
                }

                location /experiment {
                    internal;
                    proxy_pass http://www.reddit.com/r/haskell;
                }
            }
        }

    This is kind of working, but the CSS and JS files won't load. Can anyone tell me what's wrong with this config file, or what the right way to do it would be? Thanks, Alex


  • JSON/AJAX web service call gets redirected (302 Object moved) and then 500 unknown web method name

    - by user646499
    I have been struggling with this for some time. I searched around but didn't find a solution yet. This is only an issue on the production server; in my development environment, everything works just fine.

    I am using jQuery/AJAX to update product information based on the product's Color attribute. For example, a product has colors A and B, and the price for color A is different from the price for color B. When the user chooses a different color, the price information is updated as well. What I did is to first add a JavaScript function:

        function updateProductVariant() {
            var myval = jQuery("#<%=colorDropDownList.ClientID%>").val();
            jQuery.ajax({
                type: "POST",
                url: "/Products.aspx/GetVariantInfoById",
                data: "{variantId:'" + myval + "'}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (response) {
                    var obj = jQuery.parseJSON(response.d);
                    jQuery("#<%=lblStockAvailablity.ClientID%>").text(obj.StockMessage);
                    jQuery(".price .productPrice").text(obj.CurrentPrice);
                    jQuery(".price .oldProductPrice").text(obj.OldPrice);
                }
            });
        }

    Then I register the dropdown list's 'onclick' event to point to the function 'updateProductVariant'. GetVariantInfoById is a [WebMethod] in the codebehind, and it's also very straightforward:

        [WebMethod]
        public static string GetVariantInfoById(string variantId)
        {
            int id = Convert.ToInt32(variantId);
            ProductVariant productVariant = IoC.Resolve().GetProductVariantById(id);
            string stockMessage = productVariant.FormatStockMessage();

            StringBuilder oBuilder = new StringBuilder();
            oBuilder.Append("{");
            oBuilder.AppendFormat(@"""{0}"":""{1}"",", "StockMessage", stockMessage);
            oBuilder.AppendFormat(@"""{0}"":""{1}"",", "OldPrice", PriceHelper.FormatPrice(productVariant.OldPrice));
            oBuilder.AppendFormat(@"""{0}"":""{1}""", "CurrentPrice", PriceHelper.FormatPrice(productVariant.Price));
            oBuilder.Append("}");
            return oBuilder.ToString();
        }

    All this works fine on my local box, but when I upload it to the production server and capture the traffic using Fiddler, the call to /Products.aspx/GetVariantInfoById becomes a GET instead of a POST.

    On my local box:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 03 Mar 2011 09:00:08 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private, max-age=0
        Content-Type: application/json; charset=utf-8
        Content-Length: 117
        Connection: Close

    But when deployed on the host, it becomes:

        HTTP/1.1 302 Found
        Proxy-Connection: Keep-Alive
        Connection: Keep-Alive
        Content-Length: 186
        Via: 1.1 ADV-TMG-01
        Date: Thu, 03 Mar 2011 08:59:12 GMT
        Location: http://www.pocomaru.com/Products.aspx/GetVariantInfoById/default.aspx
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET

    Then, of course, it returns a 500 error: "Unknown web method GetVariantInfoById/default.aspx. Parameter name: methodName".

    Can someone please take a look? I think some configuration on the production server automatically redirects my web service call to some other URL, but since the production server is out of my control, I am seeking a local fix for this. Thanks for the help!

  • Accessing TFS from Powershell

    - by w4ymo
    Hello. I am new to PowerShell, and I am trying to get branches from TFS and merge them using a PowerShell script. Unfortunately, I am failing at the first hurdle. I have Visual Studio 2010 installed on my local machine and can access the TFS server (also 2010) fine. I am running the script from my local machine, and it has the following lines:

        $tfs = get-tfs http://TFSServerName:8080/TFSProject
        $branchfolders = $tfs.VCS.GetItems('$/Dev/Branches/', $tfs.RecursionType::OneLevel)

    I receive the following error on the second line:

        Exception calling "GetItems" with "2" argument(s): "Unable to connect to the remote server"

    I have configured the TFS server to accept incoming connections on port 8080, which works, but I am now not sure how to resolve this error. Is further configuration required? Thanks for any help given.


  • SocketException preventing use of C# TCPListener in Windows Service

    - by JoeGeeky
    I have a Windows Service that does the following when started. When run from a console application it works fine, but once I put it in a Windows Service I get the exception below. Here is what I have tried so far:

    - Disabled the firewall; also tried adding explicit exclusions for the exe, port, and protocol
    - Checked CAS policy config, which shows unrestricted rights
    - Configured the service to run as an Administrator account, Local System, Local Service, and Network Service, each with the same result
    - Tried different ports
    - Also tried 127.0.0.1, just to see... same issue

    This is wrecking my head, so any help would be greatly appreciated. The code:

        var _listener = new TcpListener(endpoint); // 192.168.2.2:20000
        _listener.Start();

    The resulting exception:

        Service cannot be started. System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
            at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
            at System.Net.Sockets.Socket.Bind(EndPoint localEP)
            at System.Net.Sockets.TcpListener.Start(Int32 backlog)
            at System.Net.Sockets.TcpListener.Start()
            at Server.RequestHandler.StartServicingRequests(IPEndPoint endpoint)
            at Server.Server.StartServer(String[] args)
            at Server.Server.OnStart(String[] args)
            at System.ServiceProcess.ServiceBase.ServiceQueuedMainCallback(Object state)


  • luntbuild + maven + findbugs = OutOfMemoryException

    - by Johannes
    Hi, I've been trying to get Luntbuild to generate and publish a project site for our project, including a Findbugs report. All the other reports (Cobertura, Surefire, JavaDoc, Dashboard) work fine, but Findbugs bails out with an OutOfMemoryException. Excluding Findbugs from report generation fixes the build, although obviously without a Findbugs report.

    The funny thing is that I first encountered this problem locally and solved it by setting MAVEN_OPTS=-Xmx512m. That does not seem to be enough in Luntbuild, however: setting that exact same option as an environment variable of my builder doesn't make a difference. I've found a couple of posts on the 'net stating you should also add -XX:MaxPermSize=512m to MAVEN_OPTS and/or pass -Dmaven.findbugs.jvmargs=-Xmx512m to mvn.bat. None of these (or their combination) seem to help, though, so any hints would be greatly appreciated!

    Cheers, Johannes

    Relevant information: Luntbuild is 1.5.6, Maven is 2.1.0, findbugs-maven-plugin is 2.0.1. This is the Findbugs section of the relevant pom.xml:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>findbugs-maven-plugin</artifactId>
            <version>2.0.1</version>
        </plugin>

    This is the head of my build log:

        User "luntbuild" started the build
        Perform checkout operation for VCS setting:
        Vcs name: Subversion
        Repository url base: http://some.repository.com/repo/
        Repository layout: multiple
        Directory for trunk: trunk
        Directory for branches: branches
        Directory for tags: tags
        Username: xxxx
        Password: xxxx
        Web interface: ViewVC
        URL to web interface: http://some.repository.com/repo/
        Quiet period:
        modules:
        Source path: somepath, Branch: , Label: , Destination path: somewhere
        Source path: somepath, Branch: somewhere1.0.x, Label: , Destination path: somewhere-1.0.x
        Source path: somepath, Branch: somewhere1.1.x, Label: , Destination path: somewhere-1.1.x
        Update url: http://some.repository.com/repo//trunk
        Duration of the checkout operation: 0 minutes
        Perform build with builder setting:
        Builder name: default
        Builder type: Maven2 builder
        Command to run Maven2: "C:\maven\apache-maven-2.1.0\bin\mvn.bat" -e -f somewhere\pom.xml -P site -Dmaven.test.skip=false -DbuildDate="Tue Nov 24 11:13:24 CET 2009" -DbuildVersion="site-core138" -Dsvn.username=xxxx -Dsvn.password=xxxx -DstagingSiteURL=file:///C:/luntbuild/core-reports -Dmaven.findbugs.jvmargs=-Xmx512m
        Directory to run Maven2 in:
        Goals to build: site:stage site:stage-deploy
        Build properties: buildVersion="site-core138" artifactsDir="C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts" buildDate="Tue Nov 24 11:13:24 CET 2009" junitHtmlReportDir=""
        Environment variables: MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=512m"
        Build success condition: result==0 and builderLogContainsLine("INFO","BUILD SUCCESSFUL")
        Execute command: Executing 'C:\maven\apache-maven-2.1.0\bin\mvn.bat' with arguments: '-e' '-f' 'somewhere\pom.xml' '-P' 'site' '-Dmaven.test.skip=false' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-DbuildVersion=site-core138' '-Dsvn.username=xxxxxx' '-Dsvn.password=xxxxxx' '-DstagingSiteURL=file:///C:/luntbuild/reports' '-Dmaven.findbugs.jvmargs=-Xmx512m' '-DbuildVersion=site-core138' '-DartifactsDir=C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-X' 'site:stage' 'site:stage-deploy'

    Tail of my build log:

        Analyzed: C:\luntbuild\somewhere-work\somewhere\...\SomeClass.class
        ...
        Analyzed: C:\luntbuild\somewhere-work\somewhere\...\target\classes
        Aux: C:\luntbuild\somewhere-work\somewhere\...\target\classes
        Aux: c:\maven\local-repo\...\somejar-1.1.1.1-SNAPSHOT.jar
        Aux: c:\maven\local-repo\commons-lang\commons-lang\2.3\commons-lang-2.3.jar
        ...
        Aux: c:\maven\local-repo\org\openoffice\ridl\3.1.0\ridl-3.1.0.jar
        Aux: c:\maven\local-repo\org\openoffice\unoil\3.1.0\unoil-3.1.0.jar
        [INFO] ------------------------------------------------------------------------
        [ERROR] FATAL ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Java heap space
        [INFO] ------------------------------------------------------------------------
        [DEBUG] Trace
        java.lang.OutOfMemoryError: Java heap space
            at java.util.HashMap.<init>(HashMap.java:209)
            at edu.umd.cs.findbugs.ba.type.TypeAnalysis$CachedExceptionSet.<init>(TypeAnalysis.java:114)
            at edu.umd.cs.findbugs.ba.type.TypeAnalysis.getCachedExceptionSet(TypeAnalysis.java:688)
            at edu.umd.cs.findbugs.ba.type.TypeAnalysis.computeThrownExceptionTypes(TypeAnalysis.java:439)
            at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:411)
            at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:89)
            at edu.umd.cs.findbugs.ba.Dataflow.execute(Dataflow.java:356)
            at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:82)
            at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:44)
            at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331)
            at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281)
            at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:173)
            at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:64)
            at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331)
            at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281)
            at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysis(ClassContext.java:937)
            at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysisNoDataflowAnalysisException(ClassContext.java:921)
            at edu.umd.cs.findbugs.ba.ClassContext.getCFG(ClassContext.java:326)
            at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.analyzeMethod(BuildUnconditionalParamDerefDatabase.java:103)
            at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.considerMethod(BuildUnconditionalParamDerefDatabase.java:93)
            at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.visitClassContext(BuildUnconditionalParamDerefDatabase.java:79)
            at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
            at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:971)
            at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:222)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86)
            at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:230)
            at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:912)
            at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:756)
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 17 minutes 16 seconds
        [INFO] Finished at: Tue Nov 24 11:31:23 CET 2009
        [INFO] Final Memory: 70M/127M
        [INFO] ------------------------------------------------------------------------
        Maven2 builder failed: build success condition not met!

    Note that apparently Maven only uses 70 MB... but that probably doesn't mean anything, since the Findbugs plugin forks its own process.


  • How to set permissions on MSMQ Cluster queues?

    - by JorgeSandoval
    I've got a cluster with functioning private MSMQ 3.0 queues. I'm trying to programmatically set the permissions, but can't seem to connect via System.Messaging on the queues. The code below works just fine when working with local queues (and using .\ nomenclature for the local queue). How can I programmatically set the permissions on the clustered queues?

    PowerShell code, executed from the active node:

        function set-msmqpermission ([string] $queuepath, [string] $account, [string] $accessright)
        {
            if (!([System.Messaging.MessageQueue]::Exists($queuepath))) {
                throw "$queuepath could not be found."
            }
            $q = New-Object System.Messaging.MessageQueue($queuepath)
            $q.SetPermissions($account,
                [System.Messaging.MessageQueueAccessRights]::$accessright,
                [System.Messaging.AccessControlEntryType]::Set)
        }

        set-msmqpermission "clusternetworkname\private$\qa1ack" "UserAccount" "FullControl"

    The error:

        Exception calling "SetPermissions" with "3" argument(s): "Invalid queue path name."
        At line:30 char:19
        + $q.SetPermissions <<<< ($account,[System.Messaging.MessageQueueAccessRights]::$accessright,
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException


  • Timer or Thread?

    - by Pradeep Singh
    I have a method (say method1) that writes to a database (SQL Server) and another method (say method2) that tries to access the same database some time later and update the data row that was created by method1. The problem arises when method1 fails to access the db due to the LAN being disconnected (this is not an exception; this is a scenario that will definitely arise in my software, and getting into the details would make the question too complex). If method1 fails to access the db, method2 cannot work. What I want is for method1 to store values in a local db instead of the server when the LAN is disconnected, and as soon as it enters a value in the local db, the application should start trying to reach the server every 10-15 seconds. What should I use: a timer, or a new thread?
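
    Either way, what's being described is a periodic retry loop around the local-to-server sync. As a minimal sketch of the pattern (shown here in Java with ScheduledExecutorService; in .NET a System.Timers.Timer or a background thread sleeping in a loop plays the same role, and tryFlushToServer is a placeholder for the real sync logic):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class RetrySync {
            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                // Every 10 seconds, try to push locally buffered rows to the server.
                // scheduleWithFixedDelay waits for one attempt to finish before
                // scheduling the next, so slow attempts never overlap.
                scheduler.scheduleWithFixedDelay(() -> {
                    try {
                        tryFlushToServer();
                    } catch (Exception e) {
                        // Server still unreachable: keep the rows local, retry later.
                    }
                }, 10, 10, TimeUnit.SECONDS);
            }

            static void tryFlushToServer() throws Exception {
                // Placeholder: read pending rows from the local db, write them
                // to the server, then mark them as synced locally.
            }
        }

    A dedicated thread only buys you something if the sync work must not share a timer thread with other callbacks; otherwise a timer/scheduler is the simpler choice.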


  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience

    Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post. This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let's take a deep dive into one of these composers, the Oracle Composer.

    Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:

    - Who sees the changes
    - When the changes are made
    - What changes are made

    Change for me, change for you, change for all of you

    One of the most powerful aspects of Oracle Composer is its flexibility. Oracle uses Oracle Composer to make changes for a user or group of users - those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries.

    Business systems analysts can make changes to Oracle Fusion Application UIs for groups of users or all users. Oracle's Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata.

    What time is it?

    Users make changes to UIs at design time, at runtime, and at design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: runtime changes and design-time-at-runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that only affect the UIs they see. What is new with Oracle Composer is the last area: design time at runtime. A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application.

    What can I do?

    Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user.

    The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View.

    Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions:

    - Add content - including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows)
    - Rearrange content - performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around
    - Edit component properties and parameters - for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds
    - Hide or show components - hidden components can be re-shown
    - Delete components
    - Change page layout - users can select from eight pre-defined layouts
    - Edit page properties - create or edit a page's parameters and display properties
    - Reset page customizations - remove edits made to the page in the current layer and/or reset the page to a previous state

    Detailed information on each of these capabilities, and on the additional actions not covered in the list above, can be found in the Oracle Fusion Middleware Developer's Guide for Oracle WebCenter.

    Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. Detailed information on Source View can be found in the Oracle Fusion Middleware Developer's Guide for Oracle WebCenter.

    Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use, so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse. Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.

    From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.


  • Error! How to install GlassFish on Ubuntu 9.04?

    - by artaxerxe
    I'm trying to install GlassFish on my Ubuntu box and get the error:

        Could not locate a suitable jar utility. Please ensure that you have Java 6 or newer installed on your system and accessible in your PATH or by setting JAVA_HOME

    When I type echo $PATH, it prints out:

        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/jdk1.6.0_20:/opt/jdk1.6.0_20/bin

    Can anybody explain where the problem is and how to solve it? I have also read this topic, but it doesn't work. N.B. Before I reinstalled Ubuntu it worked fine; this problem appeared after the reinstallation.


  • apache mod_rewrite rule in httpd.conf for modifying some paths, but not others

    - by wallyk
    I'm having quite a challenge creating an appropriate rewrite rule for Apache/2.2.14 on Fedora 10. I'm working through the CodeIgniter-Doctrine tutorial, which uses an .htaccess file. (Search for "Removing index.php from CodeIgniter urls" about 10% down.) But since that's not recommended for a production server, I'm trying to tweak it to work in /etc/httpd/conf/httpd.conf:

        <VirtualHost *:80>
            ServerName ci_doctrine
            DocumentRoot /var/www/html/ci_doctrine
            ErrorLog /var/log/httpd/cid-error_log
            CustomLog /var/log/httpd/cid-access_log common

            <IfModule mod_rewrite.c>
                RewriteEngine on
                RewriteLog /var/log/httpd/cid_rewrite
                RewriteLogLevel 9

                # RewriteCond ^/css/style.css$   (these have bad syntax, but that's beside the point)
                # RewriteRule ^/css/style.css$ /css/style.css [L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.*)$ /index.php/$1 [L]
            </IfModule>

            <IfModule !mod_rewrite.c>
                ErrorDocument 404 /ci_doctrine/index.php
            </IfModule>
        </VirtualHost>

    It seems like the tutorial's .htaccess rules properly test for existing files and leave the URL alone in such cases, but the rewrite log says that the conditions are true (that is, that the file does not exist) even though it's there:

        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/login'
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-f' => matched
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-d' => matched
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/login' -> '/index.php//login'
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK]
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/login'
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-f' => matched
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-d' => matched
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/login' -> '/index.php//login'
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login
        127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK]
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/css/style.css'
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/css/style.css' -> '/index.php//css/style.css'
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK]
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/css/style.css'
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/css/style.css' -> '/index.php//css/style.css'
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css
        127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK]

    The file .../css/style.css was working properly before I started messing around with the rewrite rules, so it should be in the right place. But now the path is always munged up by the rewriting, though the virtual components below index.php are properly translated. What am I doing wrong?


  • GnuPG + Webservice + ASP.NET

    - by Karol Bladek
    Hi! I'm exhausted. I have installed GnuPG and exported a secret key and two public keys (my own and one of my client's) from another instance of GnuPG. I'm trying to configure 'my encrypting/decrypting' method on the local machine. When I run the encrypting method from a little console application, it works fine. When I run this same method (with the same body) from my web service on my local machine... I get ExitCode = 2. I was happy to at least capture the error message, but unhappy with its body:

        gpg: no default secret key: secret key not available
        gpg: XXXXXXXXXXXXXXXX.xml: sign+encrypt failed: secret key not available

    What should I do? What's wrong? Best regards, Karol Bladek

