Search Results

Search found 11935 results on 478 pages for 'knowledge module'.

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and would like some insight on whether to use a Windows or *nix platform and which applications would be required. The server will host a small PHP application that will connect to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in. From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous if using a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (especially on *nix)? Also, does IIS have per-directory configuration functionality like Apache does with .htaccess? For a Windows-based solution: IIS (comes with FTP) or Apache (has the mod_ftp module). For a *nix-based solution: Apache.
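
    For reference on the mail point: PHP's mail() function is only a hand-off, and the transport is set in php.ini. A hedged sketch of the two usual setups (hostnames are placeholders, not recommendations):

        ; php.ini on Windows: mail() relays through an SMTP host
        SMTP = mail.example.internal
        smtp_port = 25
        sendmail_from = webapp@example.internal

        ; php.ini on *nix: mail() shells out to a local MTA binary instead
        sendmail_path = /usr/sbin/sendmail -t -i

    Windows Server has long offered an SMTP Server feature (managed through IIS tooling, separate from the website itself), while on *nix a local MTA such as Postfix or Sendmail typically fills that role.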

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running: $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However I get an "unknown option -w" output. This slightly modified command: $ sudo modprobe -r usb_storage Fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on serverfault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
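
    One approach worth noting, assuming a reasonably recent kernel: rather than unloading usb_storage (which the kernel refuses while the module is in use), you can detach and re-attach a single device through sysfs. The bus ID below is a placeholder; find yours first:

        # list devices with their bus IDs (e.g. "1-4.2")
        lsusb -t
        ls /sys/bus/usb/devices/

        # unbind and rebind just that device, simulating an unplug/replug
        echo '1-4.2' | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-4.2' | sudo tee /sys/bus/usb/drivers/usb/bind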

    Read the article

  • Can't get 1440x900 resolution with GRUB2 although vbeinfo says it's available

    - by TomSW
    I'm trying to use GRUB2 in graphical mode with 1440x900 resolution, but the result is always garbled nonsense: the highest resolution I can get is 1280x800. Word from googling is that as long as vbeinfo lists a resolution, GRUB2 can use it. This doesn't seem to be true: vbeinfo says that 1440x900 is available, but it doesn't work. Testing it from the GRUB2 command line: set gfxmode=1440x900 terminal_output gfxterm # -> garbled nonsense # back to trusty 640x480 terminal_output console The graphics card is an Intel GM965. Once Linux boots, the framebuffer switches to 1440x900. Added after epheminent's reply and various experiments: vbeinfo lists two sets of modes. The first set runs from 0x160 to 0x16b, with resolutions 768x480, 960x600, 1280x800 and 1440x900. Then - after a bunch of text-only modes - the second set, containing resolutions 1024x768, 800x600, and 640x480. The first set of modes aren't altered by 915resolution. They all work except 1440x900. The resolution of modes in the second set can be altered using the 915resolution module / command available in GRUB2 >= 1.99. # in /boot/grub/grub.cfg insmod 915resolution # 30, 32, 34 all work for me: all that varies is which modes are altered 915resolution 30 1440 900 # setting an impossible resolution changes the mode to "text-only" # in my case 1280x1024 is not supported 915resolution 30 1280 1024 Clearly, 1440x900 should just work: adding it with 915resolution is just a workaround.
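
    For anyone arriving here wanting the persistent setup rather than command-line testing, the usual knobs live in /etc/default/grub (a sketch; Debian/Ubuntu layout assumed):

        GRUB_GFXMODE=1440x900
        # hand the same mode over to the kernel instead of re-probing
        GRUB_GFXPAYLOAD_LINUX=keep

        # then regenerate grub.cfg:
        #   sudo update-grub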

    Read the article

  • Load Balancing is unusual in Apache in round-robin mode when one of the tomcat is brought down

    - by srayker
    We are facing unusual behavior with round-robin load balancing on Apache when one of the Tomcat servers is brought down. Our setup: we have 2 Apache web servers on the front end using the mod_jk module for load balancing, with round robin for load distribution. We have also enabled session stickiness. This is followed by 4 Tomcat servers on which the applications are running. Sometimes under heavy load, if there is slowness in our DB tier, we find that eventually one of the Tomcats goes into a hung state and needs a restart. The moment we bounce that Tomcat, we see a spike in requests on one of the other servers, and it also goes into a hung state and needs a restart. Eventually all the servers face the same condition. What beats me is why Apache transfers the whole load to one server instead of distributing it. We are now trying worker.balancer.method=B to see if this helps resolve our issue. Any thoughts on resolving the issue are greatly appreciated. regards, srayker
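
    For context, a hedged sketch of the relevant workers.properties (worker names and hosts are placeholders):

        worker.list=balancer
        worker.balancer.type=lb
        worker.balancer.balance_workers=tc1,tc2,tc3,tc4
        # sticky sessions pin each session to the Tomcat that created it
        worker.balancer.sticky_session=1
        # B = "busyness": route new requests to the worker with the fewest
        # in-flight requests, rather than rotating by request count
        worker.balancer.method=B

        worker.tc1.type=ajp13
        worker.tc1.host=tomcat1.example.local
        worker.tc1.port=8009
        # tc2-tc4 are defined the same way

    One plausible mechanism for the pile-on: with sticky sessions, every session that lived on the bounced Tomcat loses its affinity at once, so the whole backlog of retried requests can land on whichever single worker the balancer picks next.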

    Read the article

  • Toshiba Satellite C665 Rebooting from Standby

    - by Coodu
    I am currently working on a C665 with a strange issue. When the panel is closed, the notebook puts itself to sleep in the usual way, and the power LED changes to a pulse to indicate that the device is asleep. However, when the panel is opened to resume using the notebook, the system restarts itself instead, beginning from the Toshiba logo and proceeding to boot back into W7. I should also note that each time this occurs, the "Windows Startup Recovery" prompt appears, indicating that the system was not shut down correctly. Some things I have tried: Updated to the latest Toshiba BIOS. Returned BIOS settings to their defaults. Swapped memory to a known-good module, and tested that module in both memory slots in the system. Confirmed that power settings are set to sleep/wake when the power button is pressed. Ran a quick HDD fitness test using a Parted Magic USB stick. Checked for BSOD logs using BlueScreenView, none found. Ran sfc, no violations found. Any ideas? I have a good feeling the system is restarting itself; in the Event Viewer there is a "Kernel-Power" error, but it simply says "The system was not shut down correctly." Perhaps a bad driver? I'm not sure. Any advice would be great.
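
    A couple of quick diagnostics that may help narrow this down, run from an elevated command prompt (stock Windows tools, nothing Toshiba-specific):

        powercfg /a         :: which sleep states (S3 etc.) the firmware exposes
        powercfg /lastwake  :: what triggered the most recent wake
        powercfg /energy    :: 60-second trace that flags drivers with power problems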

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:
    - admin enables write-back cache with BBU
    - writes are cached to the RAID controller's RAM (major performance benefit)
    - the battery saves uncommitted and cached data in the event of a power loss (reliability)
    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of big traffic. So, that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)
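
    For what it's worth, the usual way to watch this on the battery/cache side is the controller vendor's CLI; a hedged example for LSI/Avago MegaRAID cards (binary name and output fields vary by package and firmware):

        # battery / CacheVault state, charge level, and relearn status
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
        # per-logical-drive cache policy, including a forced write-through fallback
        MegaCli64 -LDGetProp -Cache -LAll -aAll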

    Read the article

  • Does VLC Player work well on Windows 7 64-bit?

    - by ????
    I tried VLC Player on Windows 7 64-bit version and the playback was fine, but when the video is maximized the image is very pixelated. I tried this on a Radeon HD 2600 XT graphics card as well as a computer with an Intel graphics chipset and both have the same result. If I use VLC Player on Windows 7 32-bit instead of 64-bit, there is no pixelation. Is this a known problem or is there any method to fix it on Windows 7 64-bit? Update: there isn't a 32-bit vs 64-bit version of VLC player, is there? (unlike 7-Zip) I also tried GOM Player and it doesn't have the problem on Windows 7 64-bit. Update: Nov 4, 2009 VLC displays an update notice: VLC 1.0.3 is a minor release fixing many bugs, especially for Windows Vista and 7, but it also introduces 2 new modes for deinterlacing, and a new udev module. Major fixes are about WMA Pro support, Dolby tracks in 4.0, v4l/v4l2 and atsc and a crash in mjpeg demuxer. Update of translations are also part of this release.

    Read the article

  • Do I need to install mssql before I can use mssql.so with php on unix?

    - by lock
    I didn't install any MSSQL instance on my localhost, which runs Windows. I just used the XAMPP package and uncommented the modules used for MSSQL. The MSSQL server resides on another Windows server, so I believe I only needed a simple connector module. I hoped that it would be the same for Unix. But whenever I open my site on the Unix production server (I use CodeIgniter, btw), the logs tell me it stops script execution after "Database Driver Class Initialized". I am not really familiar with installing Apache and friends on Unix, and I wasn't responsible for how the server was set up. But it turns out that there is no mssql.so in the PHP modules directory, so I tried to google for one. While the forums are telling me to just compile the extension, I can't simply do that, as I have no write access to the server; plus it seems that when PHP was installed, phpize didn't get installed with it. Hope someone can shed light on this for me. I think it's just easier if I can get a mssql.so for PHP 4.4.4
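
    To the title question: you don't need a full MSSQL install on the Unix box, but the classic recipe builds mssql.so against FreeTDS, which handles the wire protocol. A hedged sketch, assuming you can do the build on a similar machine and hand the .so to whoever does have write access (paths and versions are placeholders):

        # from an unpacked FreeTDS source tree (freetds.org)
        ./configure --prefix=/usr/local/freetds && make && sudo make install

        # then, inside a PHP source tree matching the server's version:
        cd php-4.4.4/ext/mssql
        phpize                                   # ships with the PHP dev package
        ./configure --with-mssql=/usr/local/freetds
        make                                     # produces modules/mssql.so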

    Read the article

  • Website with login: everything works. Website without login: menu items don't redirect to content

    - by user3660755
    I wanted to put the website online. I saw it still had a login screen; it needs to be accessible for everyone. Unpublishing the module did not work. I checked the user access view and took the login URL out of the template manager. After this, the login screen was gone from the page. But when I click on any of the website menu items, it doesn't redirect to the content. When I do have the login screen and put in the username and password, all the menus work just fine. I checked the URL; it is the same with or without login. How is this possible? I have been asking a lot of people, but no one seems to be able to give me a hint. I have been searching for the solution myself for more than a week; I just don't know anymore. My guess is that there must be a conflict in the redirection, but I am not skilled enough to recognize it, I'm afraid. Any tips would be more than welcome. Thank you

    Read the article

  • Configure Web app for external access (IIS7), allowing only certain users via AD group. All users need internal access

    - by White Island
    We have a Web app running in IIS7 (Server 2008 R2). I now need to allow external access with an SSL certificate, so certain users (e.g. the owner of the company) can use it remotely without VPN. They want to roll out the external access only to those specific users at first (thinking: a Windows credential prompt), BUT everyone will still need access internally (HTTP), without the prompt. I have the SSL cert installed on the server and public DNS configured. I've been trying to figure out how to work the authentication/authorization. I was thinking I need to disable Anonymous authn and set Windows authn, then I keep coming back to 'URL Authorization' in my research for the group setting; however, when I tried URL authz, (removed allow all, added allow rule for the special group), it broke the site internally (403.2 Forbidden, I believe it was). I thought maybe setting up a second site in IIS pointing to the same program would work, but the exact same thing happened (and again with a new app pool, just for kicks). So I guess my question is, how would you do this: allow external access, limited to users in a specific AD group, while still allowing internal access without a credentials prompt? How do I separate the external HTTPS and internal HTTP authorization requirements? Will I need to just copy the entire contents of the app in Windows Explorer to a new folder and create my external site from that? Is Windows authentication the correct option for this? I did come across this, which refers to creating a custom module. While it sounds like a solution, it's not one I'm familiar with, and I just wondered if there is a simpler way to get it to work: http://forums.iis.net/p/1182792/2000775.aspx Thanks!
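
    In case it helps a future reader, the split-site approach usually comes down to URL Authorization rules scoped to the external site only. A hedged sketch of a web.config for the HTTPS site (the group name is a placeholder, and the authentication section may need unlocking at server level first with "appcmd unlock config"):

        <configuration>
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" />
              </authentication>
              <authorization>
                <!-- drop the default allow-all rule, then allow only the AD group -->
                <remove users="*" roles="" verbs="" />
                <add accessType="Allow" roles="CONTOSO\WebApp-Remote" />
              </authorization>
            </security>
          </system.webServer>
        </configuration>

    The internal HTTP site needs to keep anonymous authentication and the default allow-all rule, which is why the two sites can't share one physical web.config: copying the app to a second folder (as you guessed), or putting the rules in applicationHost.config inside a location tag scoped to one site, are the usual ways out.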

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>

    Read the article

  • Bundler isn't loading gems

    - by Garrett
    I have been having a problem with using Bundler and being able to access my gems without having to require them somewhere, as config.gem used to do that for me (as far as I know). In my Rails 3 app, I defined my Gemfile like so: clear_sources source "http://gemcutter.org" source "http://gems.github.com" bundle_path "vendor/bundler_gems" ## Bundle edge rails: git "git://github.com/rails/arel.git" git "git://github.com/rails/rack.git" gem "rails", :git => "git://github.com/rails/rails.git" ## Bundle gem "mongo_mapper", :git => "git://github.com/jnunemaker/mongomapper.git" gem "bluecloth", ">= 2.0.0" Then I run gem bundle, it bundles it all up like expected. Inside the environment.rb file that is included within boot.rb it looks like this: # DO NOT MODIFY THIS FILE module Bundler file = File.expand_path(__FILE__) dir = File.dirname(file) ENV["PATH"] = "#{dir}/../../../../bin:#{ENV["PATH"]}" ENV["RUBYOPT"] = "-r#{file} #{ENV["RUBYOPT"]}" $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/builder-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-hyphen-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/i18n-0.3.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/arel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activemodel/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/jnunemaker-validatable-1.8.1/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/abstract-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/erubis-2.6.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mime-types-1.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mail-2.1.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rake-0.8.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/railties/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/memcache-client-1.7.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-test-0.5.3/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/rack-mount-0.4.5/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionpack/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/bluecloth-2.0.7/ext") 
$LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activerecord/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/text-format-1.0.0/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/actionmailer/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/tzinfo-0.3.16/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activesupport/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/activeresource/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/rails/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/gems/mongo-0.18.2/lib") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/bin") $LOAD_PATH.unshift File.expand_path("#{dir}/dirs/mongomapper/lib") @gemfile = "#{dir}/../../../../Gemfile" require "rubygems" unless respond_to?(:gem) # 1.9 already has RubyGems loaded @bundled_specs = {} @bundled_specs["builder"] = eval(File.read("#{dir}/specifications/builder-2.1.2.gemspec")) @bundled_specs["builder"].loaded_from = "#{dir}/specifications/builder-2.1.2.gemspec" @bundled_specs["text-hyphen"] = eval(File.read("#{dir}/specifications/text-hyphen-1.0.0.gemspec")) @bundled_specs["text-hyphen"].loaded_from = "#{dir}/specifications/text-hyphen-1.0.0.gemspec" @bundled_specs["i18n"] = eval(File.read("#{dir}/specifications/i18n-0.3.3.gemspec")) @bundled_specs["i18n"].loaded_from = "#{dir}/specifications/i18n-0.3.3.gemspec" @bundled_specs["arel"] = eval(File.read("#{dir}/specifications/arel-0.2.pre.gemspec")) @bundled_specs["arel"].loaded_from = "#{dir}/specifications/arel-0.2.pre.gemspec" @bundled_specs["activemodel"] = eval(File.read("#{dir}/specifications/activemodel-3.0.pre.gemspec")) @bundled_specs["activemodel"].loaded_from = "#{dir}/specifications/activemodel-3.0.pre.gemspec" @bundled_specs["jnunemaker-validatable"] = eval(File.read("#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec")) @bundled_specs["jnunemaker-validatable"].loaded_from = "#{dir}/specifications/jnunemaker-validatable-1.8.1.gemspec" @bundled_specs["abstract"] = eval(File.read("#{dir}/specifications/abstract-1.0.0.gemspec")) @bundled_specs["abstract"].loaded_from = "#{dir}/specifications/abstract-1.0.0.gemspec" @bundled_specs["erubis"] = eval(File.read("#{dir}/specifications/erubis-2.6.5.gemspec")) @bundled_specs["erubis"].loaded_from = "#{dir}/specifications/erubis-2.6.5.gemspec" @bundled_specs["mime-types"] = eval(File.read("#{dir}/specifications/mime-types-1.16.gemspec")) @bundled_specs["mime-types"].loaded_from = "#{dir}/specifications/mime-types-1.16.gemspec" @bundled_specs["mail"] = eval(File.read("#{dir}/specifications/mail-2.1.2.gemspec")) @bundled_specs["mail"].loaded_from = "#{dir}/specifications/mail-2.1.2.gemspec" @bundled_specs["rake"] = eval(File.read("#{dir}/specifications/rake-0.8.7.gemspec")) @bundled_specs["rake"].loaded_from = "#{dir}/specifications/rake-0.8.7.gemspec" @bundled_specs["railties"] = eval(File.read("#{dir}/specifications/railties-3.0.pre.gemspec")) @bundled_specs["railties"].loaded_from 
= "#{dir}/specifications/railties-3.0.pre.gemspec" @bundled_specs["memcache-client"] = eval(File.read("#{dir}/specifications/memcache-client-1.7.7.gemspec")) @bundled_specs["memcache-client"].loaded_from = "#{dir}/specifications/memcache-client-1.7.7.gemspec" @bundled_specs["rack"] = eval(File.read("#{dir}/specifications/rack-1.1.0.gemspec")) @bundled_specs["rack"].loaded_from = "#{dir}/specifications/rack-1.1.0.gemspec" @bundled_specs["rack-test"] = eval(File.read("#{dir}/specifications/rack-test-0.5.3.gemspec")) @bundled_specs["rack-test"].loaded_from = "#{dir}/specifications/rack-test-0.5.3.gemspec" @bundled_specs["rack-mount"] = eval(File.read("#{dir}/specifications/rack-mount-0.4.5.gemspec")) @bundled_specs["rack-mount"].loaded_from = "#{dir}/specifications/rack-mount-0.4.5.gemspec" @bundled_specs["actionpack"] = eval(File.read("#{dir}/specifications/actionpack-3.0.pre.gemspec")) @bundled_specs["actionpack"].loaded_from = "#{dir}/specifications/actionpack-3.0.pre.gemspec" @bundled_specs["bluecloth"] = eval(File.read("#{dir}/specifications/bluecloth-2.0.7.gemspec")) @bundled_specs["bluecloth"].loaded_from = "#{dir}/specifications/bluecloth-2.0.7.gemspec" @bundled_specs["activerecord"] = eval(File.read("#{dir}/specifications/activerecord-3.0.pre.gemspec")) @bundled_specs["activerecord"].loaded_from = "#{dir}/specifications/activerecord-3.0.pre.gemspec" @bundled_specs["text-format"] = eval(File.read("#{dir}/specifications/text-format-1.0.0.gemspec")) @bundled_specs["text-format"].loaded_from = "#{dir}/specifications/text-format-1.0.0.gemspec" @bundled_specs["actionmailer"] = eval(File.read("#{dir}/specifications/actionmailer-3.0.pre.gemspec")) @bundled_specs["actionmailer"].loaded_from = "#{dir}/specifications/actionmailer-3.0.pre.gemspec" @bundled_specs["tzinfo"] = eval(File.read("#{dir}/specifications/tzinfo-0.3.16.gemspec")) @bundled_specs["tzinfo"].loaded_from = "#{dir}/specifications/tzinfo-0.3.16.gemspec" @bundled_specs["activesupport"] = eval(File.read("#{dir}/specifications/activesupport-3.0.pre.gemspec")) @bundled_specs["activesupport"].loaded_from = "#{dir}/specifications/activesupport-3.0.pre.gemspec" @bundled_specs["activeresource"] = eval(File.read("#{dir}/specifications/activeresource-3.0.pre.gemspec")) @bundled_specs["activeresource"].loaded_from = "#{dir}/specifications/activeresource-3.0.pre.gemspec" @bundled_specs["rails"] = eval(File.read("#{dir}/specifications/rails-3.0.pre.gemspec")) @bundled_specs["rails"].loaded_from = "#{dir}/specifications/rails-3.0.pre.gemspec" @bundled_specs["mongo"] = eval(File.read("#{dir}/specifications/mongo-0.18.2.gemspec")) @bundled_specs["mongo"].loaded_from = "#{dir}/specifications/mongo-0.18.2.gemspec" @bundled_specs["mongo_mapper"] = eval(File.read("#{dir}/specifications/mongo_mapper-0.6.10.gemspec")) @bundled_specs["mongo_mapper"].loaded_from = "#{dir}/specifications/mongo_mapper-0.6.10.gemspec" def self.add_specs_to_loaded_specs Gem.loaded_specs.merge! @bundled_specs end def self.add_specs_to_index @bundled_specs.each do |name, spec| Gem.source_index.add_spec spec end end add_specs_to_loaded_specs add_specs_to_index def self.require_env(env = nil) context = Class.new do def initialize(env) @env = env && env.to_s ; end def method_missing(*) ; yield if block_given? ; end def only(*env) old, @only = @only, _combine_only(env.flatten) yield @only = old end def except(*env) old, @except = @except, _combine_except(env.flatten) yield @except = old end def gem(name, *args) opt = args.last.is_a?(Hash) ? 
args.pop : {} only = _combine_only(opt[:only] || opt["only"]) except = _combine_except(opt[:except] || opt["except"]) files = opt[:require_as] || opt["require_as"] || name files = [files] unless files.respond_to?(:each) return unless !only || only.any? {|e| e == @env } return if except && except.any? {|e| e == @env } if files = opt[:require_as] || opt["require_as"] files = Array(files) files.each { |f| require f } else begin require name rescue LoadError # Do nothing end end yield if block_given? true end private def _combine_only(only) return @only unless only only = [only].flatten.compact.uniq.map { |o| o.to_s } only &= @only if @only only end def _combine_except(except) return @except unless except except = [except].flatten.compact.uniq.map { |o| o.to_s } except |= @except if @except except end end context.new(env && env.to_s).instance_eval(File.read(@gemfile), @gemfile, 1) end end module Gem @loaded_stacks = Hash.new { |h,k| h[k] = [] } def source_index.refresh! super Bundler.add_specs_to_index end end But when I try to access any of my gems, e.g. MongoMapper, Paperclip, Haml, etc. I get: NameError: uninitialized constant MongoMapper The same goes for any other gem. Does Bundler not include gems like the old Rails 2.0 did? Or is something messed up with my system? Any help would be appreciated, thank you!
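
    A hedged reading of the generated file above: the preamble only pushes each gem's lib directory onto $LOAD_PATH and registers the gemspecs - nothing is actually require()d until something calls the require_env method defined near the bottom, or you require the libraries yourself. A sketch (the require path follows the bundle_path "vendor/bundler_gems" setting; adjust to where the generated environment.rb lives relative to boot.rb):

        # in boot.rb / early in startup
        require File.expand_path('../vendor/bundler_gems/environment', __FILE__)
        Bundler.require_env(Rails.env)  # walks the Gemfile and requires each gem

        # or the manual equivalent for a single gem:
        require 'mongo_mapper'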

    Read the article

  • TSQL Conditionally Select Specific Value

    - by Dzejms
    This is a follow-up to #1644748 where I successfully answered my own question, but Quassnoi helped me to realize that it was the wrong question. He gave me a solution that worked for my sample data, but I couldn't plug it back into the parent stored procedure because I fail at SQL 2005 syntax. So here is an attempt to paint the broader picture and ask what I actually need. This is part of a stored procedure that returns a list of items in a bug tracking application I've inherited. There are are over 100 fields and 26 joins so I'm pulling out only the mostly relevant bits. SELECT tickets.ticketid, tickets.tickettype, tickets_tickettype_lu.tickettypedesc, tickets.stage, tickets.position, tickets.sponsor, tickets.dev, tickets.qa, DATEDIFF(DAY, ticket_history_assignment.savedate, GETDATE()) as 'daysinqueue' FROM dbo.tickets WITH (NOLOCK) LEFT OUTER JOIN dbo.tickets_tickettype_lu WITH (NOLOCK) ON tickets.tickettype = tickets_tickettype_lu.tickettypeid LEFT OUTER JOIN dbo.tickets_history_assignment WITH (NOLOCK) ON tickets_history_assignment.ticketid = tickets.ticketid AND tickets_history_assignment.historyid = ( SELECT MAX(historyid) FROM dbo.tickets_history_assignment WITH (NOLOCK) WHERE tickets_history_assignment.ticketid = tickets.ticketid GROUP BY tickets_history_assignment.ticketid ) WHERE tickets.sponsor = @sponsor The area of interest is the daysinqueue subquery mess. The tickets_history_assignment table looks roughly as follows declare @tickets_history_assignment table ( historyid int, ticketid int, sponsor int, dev int, qa int, savedate datetime ) insert into @tickets_history_assignment values (1521402, 92774,20,14, 20, '2009-10-27 09:17:59.527') insert into @tickets_history_assignment values (1521399, 92774,20,14, 42, '2009-08-31 12:07:52.917') insert into @tickets_history_assignment values (1521311, 92774,100,14, 42, '2008-12-08 16:15:49.887') insert into @tickets_history_assignment values (1521336, 92774,100,14, 42, '2009-01-16 14:27:43.577') Whenever a ticket is saved, the current values for sponsor, dev and qa are stored in the tickets_history_assignment table with the ticketid and a timestamp. So it is possible for someone to change the value for qa, but leave sponsor alone. What I want to know, based on all of these conditions, is the historyid of the record in the tickets_history_assignment table where the sponsor value was last changed so that I can calculate the value for daysinqueue. If a record is inserted into the history table, and only the qa value has changed, I don't want that record. So simply relying on MAX(historyid) won't work for me. Quassnoi came up with the following which seemed to work with my sample data, but I can't plug it into the larger query, SQL Manager bitches about the WITH statement. ;WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn FROM @Table ) SELECT rl.sponsor, ro.savedate FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.savedate FROM rows rc JOIN rows rn ON rn.ticketid = rc.ticketid AND rn.rn = rc.rn + 1 AND rn.sponsor <> rc.sponsor WHERE rc.ticketid = rl.ticketid ORDER BY rc.rn ) ro WHERE rl.rn = 1 I played with it yesterday afternoon and got nowhere because I don't fundamentally understand what is going on here and how it should fit into the larger context. So, any takers? UPDATE Ok, here's the whole thing. I've been switching some of the table and column names in an attempt to simplify things so here's the full unedited mess. 
snip - old bad code Here are the errors: Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 159 Incorrect syntax near ';'. Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 179 Incorrect syntax near ')'. Line numbers are of course not correct but refer to ;WITH rows AS And the ')' char after the WHERE rl.rn = 1 ) Respectively Is there a tag for extra super long question? UPDATE #2 Here is the finished query for anyone who may need this: CREATE PROCEDURE [dbo].[usp_GetProjectRecordsByAssignment] ( @assigned numeric(18,0), @assignedtype numeric(18,0) ) AS SET NOCOUNT ON WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn FROM projects_history_assignment ) SELECT projects_records.recordid, projects_records.recordtype, projects_recordtype_lu.recordtypedesc, projects_records.stage, projects_stage_lu.stagedesc, projects_records.position, projects_position_lu.positiondesc, CASE projects_records.clientrequested WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientrequested, projects_records.reportingmethod, projects_reportingmethod_lu.reportingmethoddesc, projects_records.clientaccess, projects_clientaccess_lu.clientaccessdesc, projects_records.clientnumber, projects_records.project, projects_lu.projectdesc, projects_records.version, projects_version_lu.versiondesc, projects_records.projectedversion, projects_version_lu_projected.versiondesc AS projectedversiondesc, projects_records.sitetype, projects_sitetype_lu.sitetypedesc, projects_records.title, projects_records.module, projects_module_lu.moduledesc, projects_records.component, projects_component_lu.componentdesc, projects_records.loginusername, projects_records.loginpassword, projects_records.assistedusername, projects_records.browsername, projects_browsername_lu.browsernamedesc, projects_records.browserversion, projects_records.osname, projects_osname_lu.osnamedesc, projects_records.osversion, projects_records.errortype, projects_errortype_lu.errortypedesc, projects_records.gsipriority, projects_gsipriority_lu.gsiprioritydesc, projects_records.clientpriority, projects_clientpriority_lu.clientprioritydesc, projects_records.scheduledstartdate, projects_records.scheduledcompletiondate, projects_records.projectedhours, projects_records.actualstartdate, projects_records.actualcompletiondate, projects_records.actualhours, CASE projects_records.billclient WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS billclient, projects_records.billamount, projects_records.status, projects_status_lu.statusdesc, CASE CAST(projects_records.assigned AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' WHEN '20000' THEN 'Client' WHEN '30000' THEN 'Tech Support' WHEN '40000' THEN 'LMI Tech Support' WHEN '50000' THEN 'Upload' WHEN '60000' THEN 'Spider' WHEN '70000' THEN 'DB Admin' ELSE rtrim(users_assigned.nickname) + ' ' + rtrim(users_assigned.lastname) END AS assigned, CASE CAST(projects_records.assigneddev AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assigneddev.nickname) + ' ' + rtrim(users_assigneddev.lastname) END AS assigneddev, CASE CAST(projects_records.assignedqa AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedqa.nickname) + ' ' + rtrim(users_assignedqa.lastname) END AS assignedqa, CASE CAST(projects_records.assignedsponsor AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedsponsor.nickname) + ' ' + 
rtrim(users_assignedsponsor.lastname) END AS assignedsponsor, projects_records.clientcreated, CASE projects_records.clientcreated WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientcreateddesc, CASE projects_records.clientcreated WHEN '1' THEN rtrim(clientusers_createuser.firstname) + ' ' + rtrim(clientusers_createuser.lastname) + ' (Client)' ELSE rtrim(users_createuser.nickname) + ' ' + rtrim(users_createuser.lastname) END AS createuser, projects_records.createdate, projects_records.savedate, projects_resolution.sitesaffected, projects_sitesaffected_lu.sitesaffecteddesc, DATEDIFF(DAY, projects_history_assignment.savedate, GETDATE()) as 'daysinqueue', projects_records.iOnHitList, projects_records.changetype FROM dbo.projects_records WITH (NOLOCK) LEFT OUTER JOIN dbo.projects_recordtype_lu WITH (NOLOCK) ON projects_records.recordtype = projects_recordtype_lu.recordtypeid LEFT OUTER JOIN dbo.projects_stage_lu WITH (NOLOCK) ON projects_records.stage = projects_stage_lu.stageid LEFT OUTER JOIN dbo.projects_position_lu WITH (NOLOCK) ON projects_records.position = projects_position_lu.positionid LEFT OUTER JOIN dbo.projects_reportingmethod_lu WITH (NOLOCK) ON projects_records.reportingmethod = projects_reportingmethod_lu.reportingmethodid LEFT OUTER JOIN dbo.projects_lu WITH (NOLOCK) ON projects_records.project = projects_lu.projectid LEFT OUTER JOIN dbo.projects_version_lu WITH (NOLOCK) ON projects_records.version = projects_version_lu.versionid LEFT OUTER JOIN dbo.projects_version_lu projects_version_lu_projected WITH (NOLOCK) ON projects_records.projectedversion = projects_version_lu_projected.versionid LEFT OUTER JOIN dbo.projects_sitetype_lu WITH (NOLOCK) ON projects_records.sitetype = projects_sitetype_lu.sitetypeid LEFT OUTER JOIN dbo.projects_module_lu WITH (NOLOCK) ON projects_records.module = projects_module_lu.moduleid LEFT OUTER JOIN dbo.projects_component_lu WITH (NOLOCK) ON projects_records.component = projects_component_lu.componentid LEFT OUTER JOIN dbo.projects_browsername_lu WITH (NOLOCK) ON projects_records.browsername = projects_browsername_lu.browsernameid LEFT OUTER JOIN dbo.projects_osname_lu WITH (NOLOCK) ON projects_records.osname = projects_osname_lu.osnameid LEFT OUTER JOIN dbo.projects_errortype_lu WITH (NOLOCK) ON projects_records.errortype = projects_errortype_lu.errortypeid LEFT OUTER JOIN dbo.projects_resolution WITH (NOLOCK) ON projects_records.recordid = projects_resolution.recordid LEFT OUTER JOIN dbo.projects_sitesaffected_lu WITH (NOLOCK) ON projects_resolution.sitesaffected = projects_sitesaffected_lu.sitesaffectedid LEFT OUTER JOIN dbo.projects_gsipriority_lu WITH (NOLOCK) ON projects_records.gsipriority = projects_gsipriority_lu.gsipriorityid LEFT OUTER JOIN dbo.projects_clientpriority_lu WITH (NOLOCK) ON projects_records.clientpriority = projects_clientpriority_lu.clientpriorityid LEFT OUTER JOIN dbo.projects_status_lu WITH (NOLOCK) ON projects_records.status = projects_status_lu.statusid LEFT OUTER JOIN dbo.projects_clientaccess_lu WITH (NOLOCK) ON projects_records.clientaccess = projects_clientaccess_lu.clientaccessid LEFT OUTER JOIN dbo.users users_assigned WITH (NOLOCK) ON projects_records.assigned = users_assigned.userid LEFT OUTER JOIN dbo.users users_assigneddev WITH (NOLOCK) ON projects_records.assigneddev = users_assigneddev.userid LEFT OUTER JOIN dbo.users users_assignedqa WITH (NOLOCK) ON projects_records.assignedqa = users_assignedqa.userid LEFT OUTER JOIN dbo.users users_assignedsponsor WITH (NOLOCK) ON projects_records.assignedsponsor = 
users_assignedsponsor.userid LEFT OUTER JOIN dbo.users users_createuser WITH (NOLOCK) ON projects_records.createuser = users_createuser.userid LEFT OUTER JOIN dbo.clientusers clientusers_createuser WITH (NOLOCK) ON projects_records.createuser = clientusers_createuser.userid LEFT OUTER JOIN dbo.projects_history_assignment WITH (NOLOCK) ON projects_history_assignment.recordid = projects_records.recordid AND projects_history_assignment.historyid = ( SELECT ro.historyid FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.historyid FROM rows rc JOIN rows rn ON rn.recordid = rc.recordid AND rn.rn = rc.rn + 1 AND rn.assigned <> rc.assigned WHERE rc.recordid = rl.recordid ORDER BY rc.rn ) ro WHERE rl.rn = 1 AND rl.recordid = projects_records.recordid ) WHERE (@assignedtype='0' and projects_records.assigned = @assigned) OR (@assignedtype='1' and projects_records.assigneddev = @assigned) OR (@assignedtype='2' and projects_records.assignedqa = @assigned) OR (@assignedtype='3' and projects_records.assignedsponsor = @assigned) OR (@assignedtype='4' and projects_records.createuser = @assigned)
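
    For anyone else skimming past the walls of SQL, here is Quassnoi's trick in isolation, using the ticket table names from the top of the question (a sketch, not the production query):

        -- number each ticket's history rows newest-first (rn = 1 is the latest save)
        WITH rows AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY ticketid
                                      ORDER BY savedate DESC) AS rn
            FROM tickets_history_assignment
        )
        SELECT rl.ticketid, ro.historyid AS last_sponsor_change
        FROM rows rl
        CROSS APPLY (
            SELECT TOP 1 rc.historyid
            FROM rows rc
            JOIN rows rn ON rn.ticketid = rc.ticketid
                        AND rn.rn = rc.rn + 1         -- rn is the save just before rc
                        AND rn.sponsor <> rc.sponsor  -- so sponsor changed at rc
            WHERE rc.ticketid = rl.ticketid
            ORDER BY rc.rn                            -- newest qualifying change wins
        ) ro
        WHERE rl.rn = 1;                              -- one output row per ticket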

    Read the article

  • NetBeans Development 7 - Windows 7 64-bit … JNI native calls ... a how to guide

    - by CirrusFlyer
    I provide this to hopefully save you some time and pain. As part of my experience in getting to know NB Development v7 on my Windows 64-bit workstation, I found another frustrating adventure in trying to get the JNI (Java Native Interface) abilities up and working in my project. As such, I am including a brief summary of the steps required (as all the documentation I found was completely incorrect for these versions of Windows and NetBeans on how to do JNI). It took a couple of days of experimentation and reviewing every webpage I could find that included these technologies as keyword searches. Yuk!! Not fun. To begin, as NetBeans Development is "all about modules", if you are reading this you probably need one or more of your modules to perform JNI calls. Most of what is available on this site or the Internet in general (not to mention the help file in NB7) is either completely wrong for these versions, or so sparse as to be essentially useless to anyone other than a JNI expert. Here is what you are looking for ... the "cut to the chase" how-to guide to get a JNI call up and working on your NB7 / Windows 64-bit box. 1) From within your NetBeans module (not the host application) declare your native method(s) and make sure you can compile the Java source without errors. Example: package org.mycompanyname.nativelogic; public class NativeInterfaceTest { static { try { if (System.getProperty( "os.arch" ).toLowerCase().equals( "amd64" ) ) System.loadLibrary( <64-bit_folder_name_on_file_system>/<file_name> ); else System.loadLibrary( <32-bit_folder_name_on_file_system>/<file_name> ); } catch (SecurityException se) {} catch (UnsatisfiedLinkError ule) {} catch (NullPointerException npe) {} } public NativeInterfaceTest() {} native String echoString(String s); } Notice that we only load the library once (as it's in a static block), because otherwise you will throw exceptions if attempting to load it again. Also take note of our single (in this example) native method titled "echoString". This is the method that our C / C++ code is going to implement and that, via the magic of JNI, we'll call from our Java code. 2) If using a 64-bit version of Windows (which we are here), we need to open a 64-bit Visual Studio Command Prompt (versus the standard 32-bit version) and execute the "vcvarsall" BAT file with an "amd64" command-line argument to set the environment up for 64-bit tools. Example: <path_to_Microsoft_Visual_Studio_10.0>/VC/vcvarsall.bat amd64 Note that you can use any version of the C / C++ compiler from Microsoft you wish. I happen to have Visual Studio 2005, 2008, and 2010 installed on my box, so I chose "v10.0", but any that supports 64-bit development will work fine. The other important aspect here is the "amd64" param. 3) In the Command Prompt, change drives \ directories so that you are at the root of the fully qualified class location on the file system that contains your native method declaration. Example: The fully qualified class name for my natively declared method is "org.mycompanyname.nativelogic.NativeInterfaceTest". As we successfully compiled our Java in step 1 above, we should find it in our NetBeans module at something similar to the following: "/build/classes/org/mycompanyname/nativelogic/NativeInterfaceTest.class" We need to make sure our Command Prompt sets "/build/classes" as the current directory, because of our next step.
    4) In this step we'll create our C / C++ header file that contains the JNI-required statements. Type the following in the Command Prompt: javah -jni org.mycompanyname.nativelogic.NativeInterfaceTest and hit enter. If you receive any kind of error stating that this is an unrecognized command, it simply means your Windows computer does not know the PATH to that command (it's in your JDK's bin folder). Either run the command from there, include the fully qualified path name when invoking the application, or set your computer's PATH environment variable to include that path in its search. This should produce a file called "org_mycompanyname_nativelogic_NativeInterfaceTest.h" ... a C header file. I'd make a copy of this in case you need a backup later. 5) Edit the header file and include an implementation for the echoString() method. Example: JNIEXPORT jstring JNICALL Java_org_mycompanyname_nativelogic_NativeInterfaceTest_echoString (JNIEnv *env, jobject jobj, jstring js) { return((*env)->NewStringUTF(env, "My JNI is up and working after lots of research")); } Notice how you can't simply return a normal Java String (because you're in C at the moment). You have to tell the passed-in JNIEnv pointer to create a Java String for you that will be returned back. Check out the following Oracle web page for other data types and how to create them for JNI purposes. 6) Close and save your changes to the header file. Now that you've added an implementation to the header, change the file extension from ".h" to ".c", as it's now a C source code file that properly implements the JNI-required interface. Example: NativeInterfaceTest.c 7) We need to compile the newly created source code file and link it too. From within the Command Prompt type the following: cl /I"path_to_my_jdks_include_folder" /I"path_to_my_jdks_include_win32_folder" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64 Example: cl /I"D:/Program Files/Java/jdk1.6.0_21/include" /I"D:/Program Files/java/jdk1.6.0_21/include/win32" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64 Notice that the quotes around the paths to the 'include' and 'include/win32' folders are required because I have spaces in my folder names ... 'Program Files'. You can omit them without problems if you have no spaces, but they are mandatory if you have spaces when using a command prompt. This will generate several files, but it's the DLL we're interested in. This is what the System.loadLibrary() Java method is looking for. 8) Congratulations! You're at the last step. Simply take the DLL and paste it at the following location: <path_of_NetBeansProjects_folder>/<project_name>/<module_name>/build/cluster/modules/lib/x64 Note that you'll probably have to create the "lib" and "x64" folders. Example: C:\Users\<user_name>\Documents\NetBeansProjects\<application_name>\<module_name>\build\cluster\modules\lib\x64\NativeInterfaceTest.dll Java code ... notice how we don't include the ".dll" file extension in the loadLibrary() call? System.loadLibrary( "/x64/NativeInterfaceTest" ); Now, in your Java code you can create a NativeInterfaceTest object and call the echoString() method, and it will return the String value you typed in the NativeInterfaceTest.c source code file. Hopefully this will save you the brain damage I endured trying to figure all this out on my own. Good luck and happy coding!
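
    A brief usage sketch once the DLL is in place (class name from the guide above; note that this particular echoString implementation ignores its argument and returns the fixed string built on the C side):

        // somewhere in your module's Java code, same package as NativeInterfaceTest
        NativeInterfaceTest nit = new NativeInterfaceTest();
        String fromC = nit.echoString("ping");
        System.out.println(fromC);  // "My JNI is up and working after lots of research"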

    Read the article

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on Google, or by searching here. So I'm asking you: how should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry.

    Details: To clarify what I'm talking about, I'll give a simple example. Say I want to implement a Heaviside step-function, i.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0.

    Implementation

    Masking: The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector.

        from numpy import *

        def step_mask(x, limit=+1):
            """Heaviside step-function.

            y = 0 if x < 0
            y = 1 if x > 0
            See below for x == 0.

            Arguments:
            x      Evaluate the function at these points.
            limit  Which limit at x == 0?
                   limit > 0:  y = 1
                   limit == 0: y = 0.5
                   limit < 0:  y = 0

            Return:
            The values corresponding to x.
            """
            b = broadcast(x, limit)
            out = zeros(b.shape)
            out[x > 0] = 1
            mask = (limit > 0) & (x == 0)
            out[mask] = 1
            mask = (limit == 0) & (x == 0)
            out[mask] = 0.5
            mask = (limit < 0) & (x == 0)
            out[mask] = 0
            return out

    List comprehension: The way suggested by the numpy docs is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions.

        def step_comprehension(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            out.flat = [(1 if x_ > 0 else
                        (0 if x_ < 0 else
                        (1 if l_ > 0 else
                        (0.5 if l_ == 0 else
                        (0)))))
                        for x_, l_ in b]
            return out

    For loop: And finally, the most naive way is a for loop. It's probably the most readable option, but Python for-loops are anything but fast, and hence a really bad idea in numerics.

        def step_for(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            for i, (x_, l_) in enumerate(b):
                if x_ > 0:
                    out[i] = 1
                elif x_ < 0:
                    out[i] = 0
                elif l_ > 0:
                    out[i] = 1
                elif l_ < 0:
                    out[i] = 0
                else:
                    out[i] = 0.5
            return out

    Test: First of all, a brief test to see if the output is correct.

        >>> x = array([-1, -0.1, 0, 0.1, 1])
        >>> step_mask(x, +1)
        array([ 0.,  0.,  1.,  1.,  1.])
        >>> step_mask(x, 0)
        array([ 0. ,  0. ,  0.5,  1. ,  1. ])
        >>> step_mask(x, -1)
        array([ 0.,  0.,  0.,  1.,  1.])

    It is correct, and the other two functions give the same output.

    Performance: How about efficiency? These are the timings:

        In [45]: xl = linspace(-2, 2, 500001)
        In [46]: %timeit step_mask(xl)
        10 loops, best of 3: 19.5 ms per loop
        In [47]: %timeit step_comprehension(xl)
        1 loops, best of 3: 1.17 s per loop
        In [48]: %timeit step_for(xl)
        1 loops, best of 3: 1.15 s per loop

    The masked version performs best, as expected. However, I'm surprised that the comprehension is on the same level as the for loop.

    Zero-rank arrays: But 0-rank arrays pose a problem. Sometimes you want to call a function with scalar input, and preferably not have to worry about wrapping all scalars in at-least-1-D arrays.

        >>> step_mask(1)
        Traceback (most recent call last):
          File "<ipython-input-50-91c06aa4487b>", line 1, in <module>
            step_mask(1)
          File "script.py", line 22, in step_mask
            out[x>0] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_for(1)
        Traceback (most recent call last):
          File "<ipython-input-51-4e0de4fcb197>", line 1, in <module>
            step_for(1)
          File "script.py", line 55, in step_for
            out[i] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_comprehension(1)
        array(1.0)

    Only the list comprehension can handle 0-rank arrays; the other two versions would need special-case handling for them. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on input as arbitrary as possible. Who knows which parameters I'll want to iterate over at some point.

    Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar then ..." special cases? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & co., or implementing directly in C. However, the performance of the masked version above is sufficient for the moment, and for the moment I would like to keep things as simple as possible.
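    One way to avoid special-casing scalars is to promote the inputs to (at least) 1-D up front, run the fast masked version, and strip the extra dimension off again when both inputs were scalar. A minimal sketch of that approach, not the poster's code:

        import numpy as np

        def step(x, limit=+1):
            """Heaviside step that accepts scalars and arrays alike."""
            scalar = np.ndim(x) == 0 and np.ndim(limit) == 0
            # Promote to 1-D so boolean-mask assignment always works.
            x1, l1 = np.broadcast_arrays(np.atleast_1d(x), np.atleast_1d(limit))
            out = np.zeros(x1.shape)
            out[x1 > 0] = 1
            out[(l1 > 0) & (x1 == 0)] = 1
            out[(l1 == 0) & (x1 == 0)] = 0.5
            # (l1 < 0) & (x1 == 0) keeps the 0 from np.zeros
            return out[0] if scalar else out

    With this, step(1) returns a plain 1.0 while step(np.array([-1, 0, 1]), 0) still returns array([ 0. , 0.5, 1. ]), at roughly the cost of the masked version plus negligible promotion overhead.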

    Read the article

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test-bed for my firm's infrastructure replacement. 10 or so Windows/Linux servers will be replaced by 2 physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 GUI, and the rest are things like Elastix, MailArchiver etc., which aren't part of the equation thus far. I have installed Hyper-V Server on a test box, and successfully got two virtual DCs running, SQL 2014 running, and an 8.1 machine which I use for the RSAT tools. When trying to install RDS (the old-fashioned kind, not the newer VDI(?) style), I get a failed installation due to the server not being able to reboot. A couple of articles have said not to do it locally, so I've moved on. Sitting at the PowerShell prompt on the Domain Controller or SQL server (both Server Core), I run the following commands:

        Import-Module RemoteDesktop
        New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

        New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter.
        AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting.
        Verify that you can connect to the server.
        At line:1 char:1
        + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
        +
        + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
        + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:
    • Triple, triple-checked the syntax.
    • Tried various other commands, and a script to accomplish the same task.
    • Checked DNS is functioning as it should.
    • Checked, to the best of my knowledge, that AD is working as it should.
    • Checked that the Network Service has the needed permissions.
    • Created another VM and placed the two roles on different servers.
    • Deleted all VMs and started again with a new domain name (lather, rinse, repeat).
    • Performed the whole installation on a second physical box running Hyper-V Server.
    • Pleaded with it.

    Interestingly, if I perform the installation via a GUI installation, the thing just works! Now I know I could convert this to a Server Core role after installation, but that wouldn't teach me what was wrong in the first instance. I've probably gone 10 pages deep through various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but none of it seems to be the fix for my set-up. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed in the error message manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!
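    The error text blames PowerShell remoting rather than RDS itself, so it's worth proving or disproving WinRM to the broker independently (natively, Test-WSMan AlstersTS.Alsters.local does this). As an illustration only, not from the original post, the same check can be scripted from any machine with Python via the pywinrm package; the credentials below are placeholders:

        import winrm  # pip install pywinrm

        # Placeholder credentials; point at the connection broker's FQDN.
        session = winrm.Session("AlstersTS.Alsters.local",
                                auth=("ALSTERS\\administrator", "password"))

        # If this round-trips, WinRM transport is healthy and the failure is
        # higher up (e.g. in the RDS deployment validation), not connectivity.
        result = session.run_ps("hostname")
        print(result.status_code, result.std_out)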

    Read the article

  • Cannot login to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When I try to connect to SQL Server (2008 R2) using Windows authentication, I cannot. Checking the Windows Application event log, I find the error:

        Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation
        failed with an infrastructure error. Check for previous errors. [CLIENT: ]
        Log Name: Application
        Source: MSSQLSERVER
        Event ID: 18456
        Level: Information
        User: AVATOPIA\ian
        OpCode:
        Task Category: Logon

    I can log in to the computer itself using Windows authentication. I can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows authentication. Just this one, which is the only one that is 2008 R2, is failing, so I assume it's a bug with 2008 R2. Note: I cannot log on locally, or remotely, using Windows authentication. I can log in locally and remotely using SQL Server authentication.

    Update: It's not limited to SQL Server Management Studio; standalone applications that connect using Windows authentication also fail. Note: it's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2) machines. I'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but I can't find it.

    Update 2: Matt figured out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators. All that remains is to figure out how to make Administrators administrators. One of these days I'm going to start a list of changes around Microsoft's "broken by default" initiative.

    Steps to reproduce the problem: how do I add a group to the sysadmin fixed server role? Here are the steps I try, that don't work:
    • Click Add, then click Object Types, and ensure that you have no ability to add groups; click OK.
    • Under Enter the object names to select, enter Administrators.
    • Click Check Names, and ensure that you are not allowed to add groups; click Cancel.
    • Click Browse..., and ensure that you have no ability to add groups.
    You should now still not have added any group to the sysadmin role.

    Additional information: SQL Server Management Studio is being run as an administrator, and SQL Server is set to use Windows authentication. I tried while logged into SQL with both sa and the only other sysadmin domain account (screenshots can be supplied for those who don't believe).
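    Since SQL authentication still works, one workaround is to connect as sa and grant a domain group (or individual account) sysadmin membership directly; on 2008 R2 the T-SQL is CREATE LOGIN ... FROM WINDOWS plus sp_addsrvrolemember. A sketch using Python's pyodbc, though any client that can run T-SQL does the job; the server name, password, and group name are placeholders:

        import pyodbc

        # SQL authentication still works on the affected box.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;UID=sa;PWD=placeholder",
            autocommit=True)
        cur = conn.cursor()

        # Create a Windows login for the group and add it to sysadmin.
        # (ALTER SERVER ROLE ... ADD MEMBER is 2012+; 2008 R2 uses the proc.)
        cur.execute("CREATE LOGIN [AVATOPIA\\Domain Admins] FROM WINDOWS")
        cur.execute("EXEC sp_addsrvrolemember 'AVATOPIA\\Domain Admins', 'sysadmin'")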

    Read the article

  • How do I protect a low budget network from rogue DHCP servers?

    - by Kenned
    I am helping a friend manage a shared internet connection in an apartment building with 80 apartments: 8 stairways with 10 apartments in each. The network is laid out with the internet router at one end of the building, connected to a cheap non-managed 16-port switch in the first stairway, where the first 10 apartments are also connected. One port is connected to another 16-port cheapo switch in the next stairway, where those 10 apartments are connected, and so forth. Sort of a daisy chain of switches, with 10 apartments as spokes on each "daisy". The building is a U-shape, approximately 50 x 50 meters and 20 meters high, so from the router to the farthest apartment it's probably around 200 meters including up-and-down stairways. We have a fair bit of problems with people hooking up wifi routers the wrong way, creating rogue DHCP servers which interrupt large groups of the users, and we wish to solve this problem by making the network smarter (instead of doing a physical unplugging binary search). With my limited networking skills, I see two ways: DHCP snooping, or splitting the entire network into separate VLANs, one per apartment. Separate VLANs give each apartment their own private connection to the router, while DHCP snooping would still allow LAN gaming and file sharing. Will DHCP snooping work with this kind of network topology, or does it rely on the network being in a proper hub-and-spoke configuration? I am not sure if there are different levels of DHCP snooping; say, expensive Cisco switches will do anything, but inexpensive ones like TP-Link, D-Link or Netgear will only do it in certain topologies? And will basic VLAN support be good enough for this topology? I guess even cheap managed switches can tag traffic from each port with its own VLAN tag, but when the next switch in the daisy chain receives the packet on its "downlink" port, wouldn't it strip or replace the VLAN tag with its own trunk tag (or whatever the name is for the backbone traffic)? Money is tight, and I don't think we can afford professional-grade Cisco (I have been campaigning for this for years), so I'd love some advice on which solution has the best support on low-end network equipment, and whether there are specific models that are recommended; for instance low-end HP switches, or even budget brands like TP-Link, D-Link etc. If I have overlooked another way to solve this problem it is due to my lack of knowledge. :)
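    Until the switches can block rogues outright, it also helps to find them fast when they appear. As one illustration (my suggestion, not from the question), a short Python script using scapy can broadcast a DHCP DISCOVER and print every server that answers; more than one responder means someone's wifi router is handing out leases. Run it as root on a machine plugged into the shared LAN:

        from scapy.all import Ether, IP, UDP, BOOTP, DHCP, srp, conf

        conf.checkIPaddr = False  # offers come from the server's IP, not ours

        # flags=0x8000 asks servers to broadcast their replies so we can sniff them.
        discover = (Ether(dst="ff:ff:ff:ff:ff:ff") /
                    IP(src="0.0.0.0", dst="255.255.255.255") /
                    UDP(sport=68, dport=67) /
                    BOOTP(chaddr=b"\x00\x11\x22\x33\x44\x55", flags=0x8000, xid=0x1234) /
                    DHCP(options=[("message-type", "discover"), "end"]))

        answered, _ = srp(discover, multi=True, timeout=5, verbose=False)
        for _, offer in answered:
            print("DHCP server %s offered %s" % (offer[IP].src, offer[BOOTP].yiaddr))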

    Read the article

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any CentOS-specific info should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing two days in, and a host of other performance issues (the box was a little underpowered), I changed back to the original server. But two days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge, no network or server changes occurred after installation. Right after installing (and at least a day in) DNS and DHCP were working just fine. However, later (day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking whether dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired, as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the remaining server, as I'd then lose my connection to it as well. Is there anything anyone can think of on why DNS and DHCP would fail two days into running perfectly?

    More info: running dnsmasq in debug mode, this is all that's displayed, even when running nslookup quackwall. I'm not sure, though, whether nslookup commands should show up in the log:

        [root@quackwall ~]# /usr/sbin/dnsmasq -dq
        dnsmasq: started, version 2.49 cachesize 150
        dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP
        dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h
        dnsmasq: reading /etc/resolv.conf
        dnsmasq: using nameserver 74.128.17.114#53
        dnsmasq: using nameserver 74.128.19.102#53
        dnsmasq: read /etc/hosts - 5 addresses
        dnsmasq-dhcp: read /etc/ethers - 2 addresses

    On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall):

        lordquackstar@quackgame:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        10.0.0.0        0.0.0.0         255.255.240.0   U         0 0          0 eth0
        0.0.0.0         10.0.0.2        0.0.0.0         UG        0 0          0 eth0
        lordquackstar@quackgame:~$ cat /etc/resolv.conf
        nameserver 10.0.0.2
        domain highwow.lan
        search highwow.lan
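    When the next failure hits, it would help to know whether dnsmasq has stopped answering entirely or is merely failing to forward upstream. A quick check along these lines (my suggestion, not from the post): send a raw A-record query straight to a chosen server and see whether anything comes back. Querying 10.0.0.2 for a LAN name tests dnsmasq itself; querying the upstream 74.128.17.114 for an internet name tests the forwarders:

        import socket
        import struct

        def dns_answers(server, name, timeout=3):
            """Return True if `server` sends any reply to an A query for `name`."""
            # Header: id, flags (recursion desired), 1 question, 0 other records.
            query = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
            for part in name.split("."):              # QNAME as length-prefixed labels
                query += struct.pack("B", len(part)) + part.encode()
            query += b"\x00" + struct.pack(">HH", 1, 1)  # root label, QTYPE=A, QCLASS=IN
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(timeout)
            try:
                s.sendto(query, (server, 53))
                s.recvfrom(512)
                return True
            except socket.timeout:
                return False

        print("dnsmasq :", dns_answers("10.0.0.2", "quackwall.highwow.lan"))
        print("upstream:", dns_answers("74.128.17.114", "google.com"))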

    Read the article

  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed are the LAMP stack, Joomla, VirtualBox, phpvirtualbox, Webmin and ProFTPD. DDClient keeps its hostname resolving to the current IP address so I can access it remotely (either the Apache web server or the FTP). Any packages installed have been installed using apt-get. Webmin, although discouraged on Ubuntu Server, is used mostly to administer the web server aspect. This issue also appeared when I was using Ubuntu Server 10.10. After periods of heavy network traffic, whether local or remote, the connection drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh; I can't FTP to the server, nor can I load the website. There are times when the server has been on for a few days and everything runs fine, because I haven't accessed it much, if at all (thus not much network traffic). I've gone through a few hardware changes, although I don't believe they caused the issue: this was happening long before I made any changes. At first I thought it was my ISP-provided router blocking traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack), but I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I ran memtest via the GRUB2 menu at boot, and once it turned up 4 errors; I ran it again with individual sticks of RAM in various slots and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc.). Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that I've only started playing with servers as a hobby, so my knowledge isn't the most refined. I'm comfortable with the command line and have the initiative to look up what I can't do. Unfortunately I can't seem to find any issues like this. Additionally: if a solution can't be found, I'd appreciate assistance writing a script that reboots the server automatically if, after x minutes, it gets no response when pinging somewhere like Google (see the sketch below). Admittedly that's not the cleanest solution should my internet end up going down, but I can't think of what else to do.
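    On that scripting request, here is a minimal sketch of such a watchdog in Python; the ping target, interval, and failure threshold are arbitrary choices, and it must run as root (e.g. started from /etc/rc.local). As noted above, it will also reboot the box during an ordinary internet outage:

        import os
        import subprocess
        import time

        TARGET = "8.8.8.8"    # assumed ping target; any reliably-up host works
        INTERVAL = 60         # seconds between checks
        MAX_FAILURES = 5      # consecutive failures before rebooting

        devnull = open(os.devnull, "w")
        failures = 0
        while True:
            # -c 1: single echo request; -W 5: wait at most 5 s for a reply
            rc = subprocess.call(["ping", "-c", "1", "-W", "5", TARGET],
                                 stdout=devnull, stderr=devnull)
            failures = 0 if rc == 0 else failures + 1
            if failures >= MAX_FAILURES:
                subprocess.call(["/sbin/reboot"])
            time.sleep(INTERVAL)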

    Read the article

  • Windows XP long login (15 minutes +)

    - by Emily Pinkerton
    I'm having a lot of issues with our Windows XP SP3 machines (about 5, but every week another joins the bandwagon). They take forever (15 minutes or more) to apply the user settings once our employees enter their username and password to log in to our domain. It only happens if a user has rebooted the machine: when they then go to log back in, it hangs forever. Reboot and restart are definitely the key words with this issue. Here are things I have tested:
    • Made sure the DNS was set to point to our two servers (Server01 & Server02 are DNS domain controllers; 01 is primary and 02 backup).
    • No major changes have been applied to our network.
    • All profiles are local, so I have deleted local profiles that aren't being used on the machines that run slow.
    • Also, I have tried to enable and disable fast logon under the local machine's Group Policy. It was not configured originally, and when I tested both settings it made the computer hang on "applying computer settings" for about 15 minutes. When it finally came up to the login screen, it was very quick to log in to the domain. However this doesn't fix my issue, and even more frustrating, upon setting it back to Not Configured it still takes forever to apply computer settings.
    • I enabled the userenv log, and here is what I see, though my experience is limited and I'm not sure how to read it exactly (this isn't the whole thing, because it's really long):

        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: LoadUserProfileP succeeded
        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning success. Final Information follows:
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->UserName =
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->lpProfilePath =
        USERENV(2ec.2f0) 10:50:41:843 lpProfileInfo->dwFlags = 0x0
        USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: Returning TRUE. hProfile = 0x818
        USERENV(2ec.2f0) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials
        USERENV(2ec.248) 10:50:41:984 IsSyncForegroundPolicyRefresh: Synchronous, Reason:NonCachedCredentials
        USERENV(3c4.3dc) 10:51:26:166 LibMain: Process Name: C:\WINDOWS\system\wbem\wmiprvse.exe
        USERENV(2ec.5cc) 11:05:08:741 ProcessGPOs: network name is 192.168.49.0
        USERENV(4a8.888) 11:05:08:804 GetProfileType: Profile already loaded.
        USERENV(4a8.888) 11:05:08:804 LoadProfileInfo: Failed to query central profile with error 2
        USERENV(4a8.888) 11:05:08:804 GetProfileType: ProfileFlags is 0

    Also this error is in the file quite a lot:

        USERENV(328.5bc) 11:05:29:733 GetUserDNSDomainName: Failed to impersonate user
        USERENV(328.834) 11:05:29:733 ImpersonateUser: Failed to impersonate user with 5.

    I'm really not sure what else to do with my limited experience, but I'm hoping someone can help me. I feel like I'm dealing with an issue way above my level, and any knowledge I can gain out of getting this issue fixed would be amazing.
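    One detail worth noticing in the quoted log: the timestamps jump from 10:50:41 to 11:05:08, which is the 15-minute hang itself, so whatever is logged immediately around such gaps is the best lead (here GetUserDNSDomainName / "Failed to impersonate user", which may point at trouble reaching a DC). A short script along these lines can scan the whole log for large gaps; the timestamp pattern is taken from the quoted lines, and the path is XP's default location for userenv.log:

        import re

        LOG = r"C:\Windows\Debug\UserMode\userenv.log"
        THRESHOLD = 60  # flag anything hanging longer than a minute

        # Lines look like: USERENV(2ec.2f0) 10:50:41:843 LoadUserProfile: ...
        stamp = re.compile(r"\) (\d+):(\d+):(\d+):(\d+) ")

        prev_secs, prev_line = None, None
        for line in open(LOG):
            m = stamp.search(line)
            if not m:
                continue
            h, mi, s, ms = map(int, m.groups())
            secs = h * 3600 + mi * 60 + s + ms / 1000.0
            if prev_secs is not None and secs - prev_secs > THRESHOLD:
                print("gap of %.0f s between:" % (secs - prev_secs))
                print("  " + prev_line.rstrip())
                print("  " + line.rstrip())
            prev_secs, prev_line = secs, line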

    Read the article

  • Unable to connect to SVN server on VPS : Subversion Configuration Problem

    - by Pritam Barhate
    Hello everybody, I purchased a VPS account (centos-5-x86_64) from Hostgator, mainly for the purpose of setting up an online Subversion server. This is the first time I am managing a VPS and my Linux skills are not so great, but I have basic knowledge of the OS and have been using it on and off as a desktop for the last few years. So, after going through a few tutorials online, as the first step I logged in using the SSH root account provided by Hostgator and tried to run yum install mod_dav_svn subversion. As it turns out, my account has cPanel/WHM, and since that has its own way of building Apache (EasyApache), the straightforward procedure of yum install mod_dav_svn subversion won't work. After that I found out how it can be done by compiling from source and so on, but the whole procedure looked long and scary [I am just a Linux noob]. So I decided I would just skip the whole Apache integration and access the server using the svn:// protocol; anyway, that's how I configure svn on our LAN. So I installed Subversion using yum install subversion. It installed fine. Then I created a folder /svn_repos/testproject and ran svnadmin create /svn_repos/testproject/. No problems. Using vi I changed the svnserve.conf and passwd files for the repository and added a user with my name. Anonymous users don't have any access; authenticated users have write access. Then I started svnserve using svnserve -d, and in the same terminal window ran svn list svn://localhost/svn_repos/testproject. It asks for authentication for the realm: I provided the root password, then the svn username and password. The command returns nothing but exits properly. That it returns nothing is understood; I didn't import anything. But if I try to access svn remotely, from another terminal, svn list svn://ip.add.of.server/svn_repos/testproject gives: svn: Can't connect to host 'ip.add.of.server': Operation timed out. The Parallels Power Panel that I got from Hostgator reports: "The firewall is not active now. To activate the firewall, choose one of the firewall operation modes." So if the firewall is not running and I can access svn using localhost, why does the operation time out when I try to access svn from a remote machine? Experienced network admins, please help. Thanks in advance. Also, please suggest a good book which gives detailed information on configuring dedicated servers with WHM and cPanel.
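    One way to narrow this down is a bare TCP connect to svnserve's port (3690 by default) from outside; it's also worth running netstat -tlnp on the VPS to confirm svnserve is listening on 0.0.0.0:3690 rather than 127.0.0.1. A small sketch of the external check, with the placeholder IP kept from the question. A timeout usually means a filter somewhere in the path or a daemon bound to localhost only; an immediate "refused" means the host is reachable but nothing is listening:

        import socket

        HOST, PORT = "ip.add.of.server", 3690  # placeholder IP; svn:// default port

        try:
            sock = socket.create_connection((HOST, PORT), timeout=5)
            # svnserve greets immediately with a capabilities line.
            print("connected, greeting:", sock.recv(256))
            sock.close()
        except socket.timeout:
            print("timed out: firewall/filter, or svnserve not listening externally")
        except socket.error as e:
            print("refused/unreachable:", e)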

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project. Our goals:
    • Super easy deploys (we want to push a button to update production)
    • Automated deploys to test environments (using Jenkins)
    • Ease of maintenance (we have an app to write, and don't want to spend our time fiddling with production issues)
    • Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    • Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)
    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some emerging best practices, we've started forming a plan using Asgard, rolling our package into a jar and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (which we felt was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple. What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around. Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike? If you don't use Elastic Beanstalk but considered it, what do you use, and why didn't you use Elastic Beanstalk? What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?
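    For the Jenkins automation piece mentioned above, the API surface involved is small: upload the build artifact to S3, register it as an application version, and point the environment at it. A sketch using the boto3 SDK; the bucket, application, and environment names are placeholders:

        import boto3

        BUCKET, APP, ENV = "my-builds", "my-app", "my-app-test"  # placeholders
        version = "build-123"  # e.g. the Jenkins build number

        # 1) Ship the artifact to S3.
        boto3.client("s3").upload_file("target/app.war", BUCKET, version + ".war")

        # 2) Register it as an Elastic Beanstalk application version.
        eb = boto3.client("elasticbeanstalk")
        eb.create_application_version(
            ApplicationName=APP,
            VersionLabel=version,
            SourceBundle={"S3Bucket": BUCKET, "S3Key": version + ".war"})

        # 3) Roll the environment onto the new version.
        eb.update_environment(EnvironmentName=ENV, VersionLabel=version)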

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    OK, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts; Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere are starting to drive me insane. Does anyone have any insight on a problem like this?
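    One way to take the browsers out of the equation is to time the same POST over both schemes from a script; if HTTPS is consistently slow there too, the problem is server-side (IIS6 SSL settings, keep-alive behaviour, or TLS record fragmentation are common suspects) rather than anything Chrome- or IE-specific. A sketch using Python's requests library; the URL and form fields are placeholders matching the question:

        import time
        import requests

        DATA = {"field": "value"}  # placeholder form payload

        for url in ("http://www.mysite.com/changes.php",
                    "https://www.mysite.com/changes.php"):
            start = time.time()
            try:
                r = requests.post(url, data=DATA, timeout=60)
                print("%s -> %d in %.1f s" % (url, r.status_code, time.time() - start))
            except requests.exceptions.Timeout:
                print("%s -> timed out after %.1f s" % (url, time.time() - start))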

    Read the article
