Search Results

Search found 11181 results on 448 pages for 'git remote'.


  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers have successfully connected to it via their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client

        # Use the same setting as you are using on
        # the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        ;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or
        # UDP server? Use the same setting as
        # on the server.
        ;proto tcp
        proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        remote xx.xxx.xx.130 1194
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        ;remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nogroup

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires
        # authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more
        # description. It's best to use
        # a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ca.crt
        cert home.crt
        key home.key

        # Verify server certificate by checking
        # that the certicate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

    But when I start OpenVPN and look in /var/log/syslog, I notice the following error:

        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to connect to the server via OpenVPN:

        $ ssh [email protected]
        ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What may I be doing wrong?
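    For diagnosis (a hedged sketch, not from the original question): exit status 4 from route generally means the kernel rejected the route, often because tun0 never received the address that makes the 10.27.12.37 gateway reachable. Replaying the failing command by hand while the tunnel is up usually narrows it down:

        ip addr show tun0    # what address/peer did the tunnel actually get?
        ip route             # is the 10.27.12.37 gateway reachable yet?
        # replay the exact add that the log shows failing
        sudo /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37

    If the manual add fails the same way, the routes pushed by the server do not match the addresses assigned to tun0, which points at a server-side topology or per-client config mismatch rather than a client bug.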

    Read the article

  • Remmina remoteapp over RDP

    - by ZombieDev
    I was wondering how to use Remmina to open applications on a Windows machine over RDP using RemoteApp (or "seamless" mode, or whatever it's actually called). I've already used Kim Knight's RemoteApp Tool to set up RemoteApp on a Windows 7 machine, and I can connect and run remote apps fine from another Windows box. Allegedly FreeRDP (which Remmina uses for its RDP plugin) has support for RemoteApp, but I'm not sure how to make use of it. I can't find any examples of people actually doing this online, but there is a Launchpad bug about the clipboard not working in remote apps, from which I can infer that there is some way to run them. I've tried many combinations of settings for Client, Startup Program and Startup Path in the Advanced tab when configuring an RDP connection in Remmina, but I can't make it work. I can connect to Windows boxes with plain RDP just fine, just not running a RemoteApp.
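    As a point of comparison (not from the original question), standalone FreeRDP around the 1.0 era exposed RemoteApp through its RAIL plugin, so it may be worth testing outside Remmina first. The exact flags changed between FreeRDP releases, so treat this invocation as an assumption to verify against xfreerdp --help:

        # hypothetical FreeRDP 1.0-era RemoteApp invocation -- verify the flags for your build
        xfreerdp --app --plugin rail.so --data "||notepad" -u myuser winhost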

    Read the article

  • Look Inside WebLogic Server Embedded LDAP with an LDAP Explorer

    - by james.bayer
    Today a question came up on our internal WebLogic Server mailing lists about an issue deleting a group from WebLogic Server. The group had a special character in the name. The WLS console refused to delete the group, reporting a java.net.MalformedURLException and another message saying "Errors must be corrected before proceeding." (screenshot omitted). The group aa:bb is the one with the issue.

    WebLogic Server includes an embedded LDAP server that can be used for managing users and groups for "reasonably small environments (10,000 or fewer users)". For organizations scaling larger or using more high-end features, I recommend looking at one of Oracle's very popular enterprise directory services products like Oracle Internet Directory or Oracle Directory Server Enterprise Edition. You can configure multiple authenticators in WebLogic Server so that you can use multiple directories at the same time.

    I am not sure WebLogic Server supports special characters in group names for the embedded LDAP server, but in this case both the console and WLST reported the same issue deleting the group with the special character in the name. Here's the WLST output:

        wls:/hotspot_domain/serverConfig/SecurityConfiguration/hotspot_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator> cmo.removeGroup('aa:bb')
        Traceback (innermost last):
          File "<console>", line 1, in ?
        weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090296]invalid URL ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?(&(objectclass=person)(wlsMemberOf=cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain))
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:254)
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.<init>(LDAPAtnGroupMembersNameList.java:119)
            at weblogic.security.providers.authentication.LDAPAtnDelegate.listGroupMembers(LDAPAtnDelegate.java:1392)
            at weblogic.security.providers.authentication.LDAPAtnDelegate.removeGroup(LDAPAtnDelegate.java:1989)
            at weblogic.security.providers.authentication.DefaultAuthenticatorImpl.removeGroup(DefaultAuthenticatorImpl.java:242)
            at weblogic.security.providers.authentication.DefaultAuthenticatorMBeanImpl.removeGroup(DefaultAuthenticatorMBeanImpl.java:407)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at weblogic.management.jmx.modelmbean.WLSModelMBean.invoke(WLSModelMBean.java:437)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
            at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
            at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
            at java.security.AccessController.doPrivileged(Native Method)
            at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447)
            at weblogic.management.mbeanservers.internal.JMXContextInterceptor.invoke(JMXContextInterceptor.java:263)
            at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
            at java.security.AccessController.doPrivileged(Native Method)
            at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447)
            at weblogic.management.mbeanservers.internal.SecurityInterceptor.invoke(SecurityInterceptor.java:444)
            at weblogic.management.jmx.mbeanserver.WLSMBeanServer.invoke(WLSMBeanServer.java:323)
            at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11$1.run(JMXConnectorSubjectForwarder.java:663)
            at java.security.AccessController.doPrivileged(Native Method)
            at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11.run(JMXConnectorSubjectForwarder.java:661)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
            at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder.invoke(JMXConnectorSubjectForwarder.java:654)
            at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
            at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
            at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1367)
            at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
            at javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source)
            at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667)
            at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
            at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
            at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
            at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)
        Caused by: java.net.MalformedURLException
            at netscape.ldap.LDAPUrl.readNextConstruct(LDAPUrl.java:651)
            at netscape.ldap.LDAPUrl.parseUrl(LDAPUrl.java:277)
            at netscape.ldap.LDAPUrl.<init>(LDAPUrl.java:114)
            at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:224)
            ... 41 more

    It's fairly clear that the : character needs to be URL-encoded to %3A or similar for this to work. But all is not lost; there is another way. You can connect an LDAP explorer like JXplorer to the WebLogic Server embedded LDAP and browse/edit the entries. Follow the instructions here, being sure to change the authentication credentials of the embedded LDAP server to a value you know, as by default they are some unknown value. You'll need to reboot the WebLogic Server Admin Server after making this change. Now configure JXplorer to connect as described in the documentation (I've circled the important inputs in the original screenshot). In this example, my domain name is "hotspot_domain", which listens on the localhost listen address and port 7001. The cn=Admin user name is a constant identifier for the administrator of the embedded LDAP and does not change, but you need to know it so you can enter it into the tool you use. Once you connect successfully, you can explore the entries and, in this case, delete the group that is no longer desired.
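    Once any LDAP client can bind, the same cleanup can also be scripted with the standard OpenLDAP command-line tools; a hedged sketch, reusing the DN, port and cn=Admin account from the post (the password is whatever you set for the embedded LDAP):

        # delete the problem group directly over LDAP; 7001 is the admin server's listen port
        ldapdelete -H ldap://localhost:7001 -D "cn=Admin" -w YourEmbeddedLdapPassword \
            "cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain"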

    Read the article

  • Script to establish SSH tunnel and then run another program that uses the tunnel

    - by Rob Hills
    I am running a GUI app (Gnucash) that connects to a remote Postgres database via a secure shell session. I can use the ssh -L command to tunnel a local port and then separately run Gnucash, and this works fine. What I'd like to do is use a single shell script that sets up the tunnel and then calls Gnucash. Is that possible? If so, how do I do it? Currently, I run commands like the following in two separate terminal windows:

        ssh -L 5433:127.0.0.1:19097 [email protected]

        gnucash postgres://gnucash@localhost:5433/gnucash_db

    If I simply put both lines in a shell script, the first line drops me into the remote shell and the second line doesn't execute until I exit the remote shell. TIA, Rob Hills
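    One common pattern (a sketch, not from the original question) is to background the tunnel with -f and skip the remote command with -N, run Gnucash, then tear the tunnel down afterwards:

        #!/bin/sh
        # Open the tunnel in the background: -f forks after auth, -N runs no remote command
        ssh -f -N -L 5433:127.0.0.1:19097 user@remotehost
        sleep 2    # crude wait for the tunnel to come up
        gnucash postgres://gnucash@localhost:5433/gnucash_db
        # Kill the backgrounded tunnel (assumes no other ssh matches this pattern)
        pkill -f "ssh -f -N -L 5433:127.0.0.1:19097"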

    Read the article

  • How do you develop web applications? [closed]

    - by ck3g
    How do you and/or your team develop your web applications? Language, framework or platform doesn't matter; I would like to know about the structure of your environment. For example:

    - Using an IDE on a workstation with the project files on a remote host, accessed via SFTP, so files are saved straight to the remote host.
    - All files are local and are uploaded to the remote host on save.
    - Files are local, the web server runs on the local computer, and testing is done against localhost.

    And so on. It would also be useful to me if you wrote about the benefits of your approach. Thanks.

    Update: there must be a question here, so here it is: which approach is best, in your opinion?

    Read the article

  • Why can't I get a working session with vnc4server

    - by ysap
    We have a couple of (identical) Ubuntu 11.10 machines, configured with gnome-classic, which we use as remote servers, letting our clients log into personal user accounts we create for them using vnc4server. We configured all the machines in the same way, following a short manual we compiled describing how to download, install and prepare a few tools and our software. The connection usually works fine, but today I set up a fresh machine and experienced problems. After installing vnc4server, I ran vncpasswd and copied the following startup file to ~/.vnc/xstartup:

        #!/bin/sh
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        gnome-session --session=gnome-classic &
        [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        xsetroot -solid grey
        vncconfig -iconic &

    Then, I started vnc4server and used two viewers (the Ubuntu Remote Desktop Viewer and the Windows RealVNC client) on two other machines, but instead of getting my desktop, I see an empty window with a grey-ish stippled background (screenshot omitted), and the cursor is a bold X. What is wrong with the setup, and why don't I get a remote session as expected?
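    That grey stipple with a bold X cursor is the bare X root window, meaning no session or window manager ever started. Note that in the xstartup above, exec /etc/vnc/xstartup replaces the script with the stock Ubuntu xstartup right after gnome-session is forked, so the two may fight over the session. A hedged rewrite to try (assumption: you want only gnome-classic, not the stock script):

        #!/bin/sh
        # Reordered xstartup sketch: set the environment up first,
        # then hand the process over to gnome-session alone.
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        xsetroot -solid grey
        vncconfig -iconic &
        exec gnome-session --session=gnome-classic

    Also check ~/.vnc/*.log on the server; it usually names the component that failed to start.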

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split, to allow for load splitting, resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface; again this worked fine and speed improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is:

        interface IMyCode {
            List<IThing> getLots( List<String> ids );
            IThing getOne( String id );
        }

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

        Client interface -> Client cache -> Remote client -> Remote server -> Server cache -> Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled, and finally the client cache also. If getOne("A") is called again at the client interface, the client cache has the data and it is returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect, and it works well.

    Now let's call getLots( [ "C", "D" ] ). This works as you would expect, by calling getOne() twice, but there is a subtlety here: the call to getLots() cannot directly make use of the cache. The sequence is to call the client interface, which in turn calls the remote client, then the remote server, and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point.

    The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy, any subsequent calls are made on the proxied side rather than the 'self' side. Have I failed to learn, or not learned something correctly? All this is implemented in Java and I haven't used EJBs.

    It seems the example may be confusing. The problem is nothing to do with cache efficiency; it is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example:

        public String getInternalString() {
            return InetAddress.getLocalHost().toString();
        }

        public String getString() {
            return getInternalString();
        }

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I also wonder how I can control this behavior, especially as the proxy may be used by a third party. The concept is fine, but the practice is certainly not what I expected. Have I missed the point somewhere?

    Read the article

  • xrdp setup over ssh

    - by Xianlin
    Here are the steps to install xrdp on Ubuntu 12.04 and get it working: http://www.ubuntututorials.com/remote-desktop-ubuntu-12-04-windows-7/

    However, I want a secure xrdp connection over SSH, and I am able to achieve it by using port forwarding in PuTTY, as below:

        L1234 == localhost:3389

    But I am still able to log in remotely through a direct xrdp connection when I am not connected over SSH. It is supposed to deny remote login when the SSH tunnel is not present. In /etc/xrdp/xrdp.ini I tried adding "ip=127.0.0.1" to the [global] section, and it didn't work.
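    A hedged sketch of the lock-down: bind the xrdp listener to loopback so only tunneled connections can reach it. The exact key and section names vary between xrdp versions (address= under [globals] in builds I've seen), so treat them as assumptions to check against your installed xrdp.ini:

        # /etc/xrdp/xrdp.ini -- [globals] in newer builds, [global] in some older ones
        [globals]
        port=3389
        address=127.0.0.1

    Then restart and test that only the tunnel path works:

        sudo service xrdp restart
        ssh -L 1234:127.0.0.1:3389 user@server   # point the RDP client at localhost:1234

    If the installed xrdp ignores the bind address, a firewall rule gives the same effect:

        sudo iptables -A INPUT -p tcp --dport 3389 ! -s 127.0.0.1 -j DROP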

    Read the article

  • How to determine if someone is accessing our database remotely?

    - by Vednor
    I own a content publishing website developed using CakePHP 2.1.2 and MySQL 5.1.63. It was developed by a freelance developer who kept remote access to the database, which I wasn't aware of. One day he accessed the site and overwrote all the data. After the attack, my hosting provider disabled remote access to our database and changed the password. But somehow he accessed the site database again and overwrote some information. We managed to stop the second attack by taking the site down immediately, but now we suspect he'll attack again. What we have been able to identify is that he runs a query that changes every record in the database in a matter of seconds. Is there any way to detect how he's accessing our database without remote access or knowledge of our cPanel password? Or to identify whether he has left something inside the site that grants him access to our database?
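    A hedged starting point for the audit (these commands assume shell access to the MySQL host and are not from the original question): list which accounts can connect from where, watch live connections, temporarily log every statement, then grep the codebase for planted backdoors.

        mysql -u root -p -e "SELECT user, host FROM mysql.user;"
        mysql -u root -p -e "SHOW PROCESSLIST;"
        mysql -u root -p -e "SET GLOBAL general_log = 'ON';"      # toggleable at runtime in MySQL 5.1
        grep -rn "eval(" --include="*.php" app/ webroot/          # common backdoor patterns
        grep -rn "base64_decode(" --include="*.php" app/ webroot/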

    Read the article

  • Continuous Delivery with TFS

    - by swapneel
    Is it possible to achieve continuous delivery of a Windows service using TFS? There are a thousand posts on how to use msdeploy with TFS for web projects, but I am trying to understand why there are no resources (blogs, articles, MSDN, best practices) on continuous delivery for a Windows service using TFS. I am not sure how to achieve the following without any working reference material, which is frustrating:

    1. Archive the existing codebase on the remote server (for the service as well as the web project), on the remote server, not the integration server, please!
    2. Stop the services on the remote server (again, not the integration server).
    3. Copy the new codebase to the remote server.
    4. Start the services.
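    For reference, one hedged way to script steps 1-4 is an InvokeProcess activity (or a post-build script) in the build template that runs plain Windows commands on the target machine, e.g. via PsExec; every name below is a placeholder:

        rem Hypothetical deploy script, run on the remote server itself -- all names are placeholders
        xcopy /E /I /Y C:\services\MyService C:\archive\MyService-backup\
        net stop MyWindowsService
        xcopy /E /Y \\buildserver\drop\MyService C:\services\MyService\
        net start MyWindowsService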

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which searches for the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

        openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

        client
        dev tun
        proto tcp
        remote nyc-a04.ipvanish.com 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        persist-remote-ip
        ca ca.ipvanish.com.crt
        tls-remote nyc-a04.ipvanish.com
        auth-user-pass
        comp-lzo
        verb 3
        auth SHA256
        cipher AES-256-CBC
        keysize 256
        tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA
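    This is expected OpenVPN behavior rather than an iptables issue: the client installs whatever routes the server pushes, and most commercial VPN servers push a redirect-gateway default route. A hedged client-side override (not in the original config) is to refuse pushed routes and add back only what you need:

        # append to the client config
        route-nopull
        # optionally re-add specific routes you still want through the tunnel, e.g.
        # route 10.8.0.0 255.255.255.0

    Note that with routes withheld, binding one program to tun0 then needs its own mechanism (policy routing or network namespaces); route-nopull only stops the whole-system redirect.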

    Read the article

  • Rails3 and Paperclip

    - by arkannia
    Hi, I have migrated my application from Rails 2.3 to Rails 3 and I have a problem with Paperclip. I saw there was a rails3 branch in the Paperclip git repository, so I added this to the Gemfile and ran bundle install:

        gem 'paperclip', :git => 'git://github.com/thoughtbot/paperclip.git', :branch => 'rails3'

    Once Paperclip was installed, the upload worked fine but the styles did not. I saw a hack to fix it:

        # in lib/paperclip/attachment.rb at line 293
        def callback which #:nodoc:
          # replace this line...
          # instance.run_callbacks(which, @queued_for_write){|result,obj| result == false }
          # with this:
          instance.run_callbacks(which, @queued_for_write)
        end

    The styles are OK after that, but I'm not able to activate the processor. My code is:

        has_attached_file :image,
          :default_url => "/images/nopicture.jpg",
          :styles => {
            :large   => "800x600>",
            :cropped => Proc.new { |instance| "#{instance.width}x#{instance.height}>" },
            :crop    => "300x300>"
          },
          :processors => [:cropper]

    My processor is located in RAILS_APP/lib/paperclip_processors/cropper.rb and contains:

        module Paperclip
          class Cropper < Thumbnail
            def transformation_command
              if crop_command and !skip_crop?
                crop_command + super.sub(/ -crop \S+/, '')
              else
                super
              end
            end

            def crop_command
              target = @attachment.instance
              trans = ""
              trans << " -crop #{target.crop_w}x#{target.crop_h}+#{target.crop_x}+#{target.crop_y}" if target.cropping?
              trans << " -resize \"#{target.width}x#{target.height}\""
              trans
            end

            def skip_crop?
              ["800x600>", "300x300>"].include?(@target_geometry.to_s)
            end
          end
        end

    My problem is that I get this error message:

        uninitialized constant Paperclip::Cropper

    The cropper processor is not loaded. Does anybody have an idea how to fix that? For information, my application works fine on Rails 2.3.4.

    Read the article

  • How to use Sprockets Rails plugin on Heroku?

    - by Kevin
    Hi, I just deployed my Rails app to Heroku, but the Javascripts that were using the Sprockets plugin don't work. I understand that, because my Heroku app is read-only, Sprockets won't work. I've found the sprockets_on_heroku plugin that should do the job, but I don't really get how to use it:

    - I added config.gem sprockets in config/environment.rb
    - I added sprockets in my .gems file
    - I pushed these to Heroku and Sprockets was successfully installed
    - I locally ran script/plugin install git://github.com/jeffrydegrande/sprockets_on_heroku.git and the plugin was successfully installed

    Nothing changed on Heroku, so I tried to install the plugin on Heroku with heroku plugins:install git://github.com/jeffrydegrande/sprockets_on_heroku.git, which returned "sprockets_on_heroku installed", but then a heroku restart or a heroku plugins command would return this:

        ~/.heroku/plugins/sprockets_on_heroku/init.rb:1: uninitialized constant ActionController (NameError)
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:25:in `load!'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `each'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/plugin.rb:22:in `load!'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/../lib/heroku/command.rb:14:in `run'
            from /opt/local/lib/ruby/gems/1.8/gems/heroku-1.8.3/bin/heroku:14
            from /opt/local/bin/heroku:19:in `load'
            from /opt/local/bin/heroku:19

    What should I do? Kevin

    Read the article

  • Trying to use authlogic-connect as a plugin in place of a gem - server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    Now, when I run bundle install, it runs fine. When I try to start the server I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get when using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.

    Read the article

  • Error // Usage: rails new APP_PATH [options] // when running 'rails server'

    - by madphill
    Background info: I'm using Git to get a repository of a project with Ruby files in it. The project lives in my Sites folder in my home directory on my Mac. I have Ruby 1.8.7 and have just upgraded Rails to 3.0.3. All I am trying to accomplish is to get localhost:3000 to render in my browser from the Git project I've already downloaded, so I can work on it locally. I ran the command rails server and got the message below:

        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -G, [--skip-git]            # Skip Git ignores and keeps
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -f, [--force]    # Overwrite files that already exist
          -s, [--skip]     # Skip files that already exist
          -p, [--pretend]  # Run but do not make any changes
          -q, [--quiet]    # Supress status output

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.
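    For context (a hedged note, not from the original post): rails prints this "rails new" usage text whenever it is run outside a directory it recognizes as a Rails 3 application, which is exactly what happens when the cloned project was generated with Rails 2.x, or when the command is run from the wrong folder. A quick check:

        cd ~/Sites/myproject       # path is a placeholder
        ls script/
        # Rails 3 apps contain script/rails  -> run: bundle exec rails server
        # Rails 2 apps contain script/server -> run: ./script/server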

    Read the article

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run on my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a Git repository. What is the easiest way to do this?

    Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
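    A hedged reading of the error (not from the original post): Capistrano runs deploy:migrate by opening an SSH connection to whichever host holds the :db role, so role :db, "localhost" makes it try to SSH into the machine running cap, where nothing is listening. Pointing the :db role at the server instead (e.g. role :db, domain, :primary => true in the deploy script) should make Capistrano run, on the server, something equivalent to:

        # what deploy:migrate effectively executes on the :db host (sketch; user and paths are placeholders)
        ssh user@example.com 'cd /var/www/example.com/current && RAILS_ENV=production rake db:migrate'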

    Read the article

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which uses grailsApplication.config for some settings. It seems that in my unit tests the service instance cannot access the config (null pointer) for its settings, while it can access those settings when I use run-app. How can I configure the service to access the grailsApplication config in my unit tests?

        class MapCloudMediaServerControllerTests {
            def grailsApplication

            @Before
            public void setUp(){
                grailsApplication.config= '''
                    video{
                        location="C:\\tmp\\" // or shared filesystem drive for a cluster
                        yamdi{
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi"
                        }
                        ffmpeg {
                            fileExtension = "flv" // use flv or mp4
                            conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k"
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                            makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg"
                        }
                        ffprobe {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe"
                            params=""
                        }
                        flowplayer {
                            version = "3.1.2"
                        }
                        swfobject {
                            version = ""
                            qtfaststart {
                                path= "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart"
                            }
                        }
                    '''
            }

            @Test
            void testMpegtoFlvConvertor() {
                log.info "In test Mpg to Flv Convertor function!"
                def controller=new MapCloudMediaServerController()
                assert controller!=null
                controller.videoService=new VideoService()
                assert controller.videoService!=null
                log.info "Is the video service null? ${controller.videoService==null}"
                controller.videoService.grailsApplication=grailsApplication
                log.info "Is grailsApplication null? ${controller.videoService.grailsApplication==null}"
                //Very important part for simulating the HTTP request
                controller.metaClass.request = new MockMultipartHttpServletRequest()
                controller.request.contentType="video/mpg"
                controller.request.content= new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes()
                controller.mpegtoFlvConvertor()
                byte[] videoOut=IOUtils.toByteArray(controller.response.getOutputStream())
                def outputFile=new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv")
                outputFile.append(videoOut)
            }
        }

    Read the article

  • Keeping third-party libraries under a Mercurial project: Sub-repos or not?

    - by fraktal
    Hello, we are developing a closed-source project, versioned with Mercurial. We are using two libraries in our project:

    - One library is developed by a third party. They use git, and we usually just pull from their repo once a week to get the latest changes.
    - The other library is developed by ourselves and is under active development. It must live in its own public Mercurial repository, as it is licensed under the LGPL (it's a fork of a third-party LGPL component, ported to our platform).

    So my question is: how should I organize the source to ensure that:

    - a developer from our team can get all the source (main project + libraries) with a single "clone" command;
    - we can easily pull the latest changes from the libraries, even though one of them is managed with git?

    Should we use Mercurial's subrepository functionality, with hg-git to access the library under git? Is it well supported by TortoiseHg and Bitbucket? (Pros: easy to pull library changes / cons: does it work well?) Or should we keep only snapshots of the libraries under our project? That is, when there are new upstream changes in the libraries, we pull them to a separate place and then copy the whole source into our project. (Pros: will work / cons: a pain, especially for the library developed by ourselves, which sees a lot of daily changes.)
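    For what it's worth, Mercurial's subrepo mapping can reference a git subrepo directly (native git subrepo support arrived around hg 1.8); a minimal sketch, with placeholder paths and URLs:

        # .hgsub at the root of the main repository
        libs/thirdparty = [git]git://example.com/thirdparty.git
        libs/ourlib     = https://bitbucket.org/ourteam/ourlib

    After committing .hgsub, a single hg clone of the main repository pulls both libraries in one step, and hg pull -u inside each subrepo picks up upstream changes.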

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, SystemTap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post "Probing PHP with Systemtap on Linux". My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3.

    Not all Linux kernels are built with SystemTap support, since this can impact stability. Check whether your running kernel (or others installed) have it enabled, and reboot with such a kernel:

        # grep CONFIG_UTRACE /boot/config-`uname -r`
        # grep CONFIG_UTRACE /boot/config-*

    When you install SystemTap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file:

        # yum install systemtap-sdt-devel

    You can now install and build PHP as shown in David's article. Basically the build is with:

        $ cd ~/php-src
        $ ./configure --disable-all --enable-dtrace
        $ make

    (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704.)

    David's article shows how to trace the probes already implemented in PHP. You can also use SystemTap to trace things like userspace PHP function calls. For example, create test.php:

        <?php
        $c = oci_connect('hr', 'welcome', 'localhost/orcl');
        $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual");
        $r = oci_execute($s);
        $row = oci_fetch_array($s, OCI_NUM);
        $x = $row[0]->load();
        $row[0]->free();
        echo $x;
        ?>

    The normal output of this file is the XML form of Oracle's DUAL table:

        $ ./sapi/cli/php ~/test.php
        <?xml version="1.0"?>
        <ROWSET>
         <ROW>
          <DUMMY>X</DUMMY>
         </ROW>
        </ROWSET>

    To trace the PHP function calls, create the tracing file functrace.stp:

        probe process("sapi/cli/php").function("zif_*")
        {
            printf("Started function %s\n", probefunc());
        }

        probe process("sapi/cli/php").function("zif_*").return
        {
            printf("Ended function %s\n", probefunc());
        }

    This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Log in as root, and run SystemTap on the PHP script:

        # cd ~cjones/php-src
        # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp
        Started function zif_oci_connect
        Ended function zif_oci_connect
        Started function zif_oci_parse
        Ended function zif_oci_parse
        Started function zif_oci_execute
        Ended function zif_oci_execute
        Started function zif_oci_fetch_array
        Ended function zif_oci_fetch_array
        Started function zif_oci_lob_load
        <?xml version="1.0"?>
        <ROWSET>
         <ROW>
          <DUMMY>X</DUMMY>
         </ROW>
        </ROWSET>
        Ended function zif_oci_lob_load
        Started function zif_oci_free_descriptor
        Ended function zif_oci_free_descriptor

    Each call and return is logged. The SystemTap scripting language allows complex scripts to be built; there are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8.

    I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display:

        provider php {
            probe oci8__connect(char *username);
            probe oci8__nls_start();
            probe oci8__nls_done();
        };

    I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro; the patch is at the end of config.m4. The macro takes the provider prototype file, the name of the header file that 'dtrace' will generate, and a list of source files with probes. When --enable-dtrace is used during PHP configuration, the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8-specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code:

        diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4
        index 34ae76c..f3e583d 100644
        --- a/ext/oci8/config.m4
        +++ b/ext/oci8/config.m4
        @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then
           PHP_SUBST_OLD(OCI8_ORACLE_VERSION)
         fi
        +
        +  if test "$PHP_DTRACE" = "yes"; then
        +    AC_CHECK_HEADERS([sys/sdt.h], [
        +      PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d],
        +          [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c])
        +      AC_DEFINE(HAVE_OCI8_DTRACE,1,
        +          [Whether to enable DTrace support for OCI8 ])
        +    ], [
        +      AC_MSG_ERROR(
        +          [Cannot find sys/sdt.h which is required for DTrace support])
        +    ])
        +  fi
        +
         fi

    In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places:

        diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c
        index e2241cf..ffa0168 100644
        --- a/ext/oci8/oci8.c
        +++ b/ext/oci8/oci8.c
        @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char
                 }
             }
         
        +#ifdef HAVE_OCI8_DTRACE
        +    if (DTRACE_OCI8_CONNECT_ENABLED()) {
        +        DTRACE_OCI8_CONNECT(username);
        +    }
        +#endif
        +
             /* Initialize global handles if they weren't initialized before */
             if (OCI_G(env) == NULL) {
                 php_oci_init_global_handles(TSRMLS_C);
        @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char
             size_t rsize = 0;
             sword result;
         
        +#ifdef HAVE_OCI8_DTRACE
        +    if (DTRACE_OCI8_NLS_START_ENABLED()) {
        +        DTRACE_OCI8_NLS_START();
        +    }
        +#endif
             PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize));
             if (result != OCI_SUCCESS) {
                 charsetid_nls_lang = 0;
             }
             smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0);
        +
        +#ifdef HAVE_OCI8_DTRACE
        +    if (DTRACE_OCI8_NLS_DONE_ENABLED()) {
        +        DTRACE_OCI8_NLS_DONE();
        +    }
        +#endif
         }
         
         timestamp = time(NULL);

    The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and the probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes.

    I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h:

        diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h
        index b0d6516..c81fc5a 100644
        --- a/ext/oci8/php_oci8_int.h
        +++ b/ext/oci8/php_oci8_int.h
        @@ -44,6 +44,10 @@
         #  endif
         # endif /* osf alpha */
         
        +#ifdef HAVE_OCI8_DTRACE
        +#include "oci8_dtrace_gen.h"
        +#endif
        +
         #if defined(min)
         #undef min
         #endif

    Now PHP can be rebuilt:

        $ cd ~/php-src
        $ rm configure && ./buildconf --force
        $ ./configure --disable-all --enable-dtrace \
              --with-oci8=instantclient,/home/cjones/instantclient
        $ make

    If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. The new probes can be seen by logging in as root and running:

        # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i'
        process("sapi/cli/php").provider("php").mark("oci8__connect")
        process("sapi/cli/php").provider("php").mark("oci8__nls_done")
        process("sapi/cli/php").provider("php").mark("oci8__nls_start")

    To test them out, create a new trace file, oci.stp:

        global numconnects;
        global start;
        global numcharlookups = 0;
        global tottime = 0;

        probe process.provider("php").mark("oci8-connect")
        {
            printf("Connected as %s\n", user_string($arg1));
            numconnects += 1;
        }

        probe process.provider("php").mark("oci8-nls_start")
        {
            start = gettimeofday_us();
            numcharlookups++;
        }

        probe process.provider("php").mark("oci8-nls_done")
        {
            tottime += gettimeofday_us() - start;
        }

        probe end
        {
            printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups);
            printf("Total NLS charset initalization time: %ld usecs/connect\n",
                   (numcharlookups > 0 ? tottime/numcharlookups : 0));
        }

    This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run SystemTap over the PHP script:

        # cd ~cjones/php-src
        # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp
        Connected as cj
        <?xml version="1.0"?>
        <ROWSET>
         <ROW>
          <DUMMY>X</DUMMY>
         </ROW>
        </ROWSET>
        Connects: 1, Charset lookups: 1
        Total NLS charset initalization time: 164 usecs/connect

    This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • Problem adding "Network Policy and Access Services" role in Server 2008

    - by Django Reinhardt
    We are encountering error code 0x80070643 when attempting to add the "Network Policy and Access Services" role on a fresh Windows Server 2008 R2 installation. Is there a known solution for this problem? Here is what information we have available so far.

    From ServerManager.log:

        2504: 2009-11-23 11:12:01.712 [CBS] installing 'IAS NT Service RasServerAll RasRoutingProtocols ' ...
        2504: 2009-11-23 11:12:01.911 [CBS] ...parents that will be auto-installed: 'RasServer '
        2504: 2009-11-23 11:12:01.912 [CBS] ...default children to turn-off: '<none>'
        2504: 2009-11-23 11:12:01.924 [CBS] ...current state of 'IAS NT Service': p: Staged, a: Staged, s: UninstallRequested
        2504: 2009-11-23 11:12:01.924 [CBS] ...setting state of 'IAS NT Service' to 'InstallRequested'
        2504: 2009-11-23 11:12:01.935 [CBS] ...current state of 'RasServerAll': p: Staged, a: Staged, s: UninstallRequested
        2504: 2009-11-23 11:12:01.935 [CBS] ...setting state of 'RasServerAll' to 'InstallRequested'
        2504: 2009-11-23 11:12:01.946 [CBS] ...current state of 'RasRoutingProtocols': p: Staged, a: Staged, s: UninstallRequested
        2504: 2009-11-23 11:12:01.946 [CBS] ...setting state of 'RasRoutingProtocols' to 'InstallRequested'
        2504: 2009-11-23 11:12:01.956 [CBS] ...current state of 'RasServer': p: Staged, a: Staged, s: UninstallRequested
        2504: 2009-11-23 11:12:01.956 [CBS] ...setting state of 'RasServer' to 'InstallRequested'
        2504: 2009-11-23 11:12:01.967 [CBS] ...'IAS NT Service' : applicability: Applicable
        2504: 2009-11-23 11:12:01.977 [CBS] ...'RasServerAll' : applicability: Applicable
        2504: 2009-11-23 11:12:01.987 [CBS] ...'RasRoutingProtocols' : applicability: Applicable
        2504: 2009-11-23 11:12:01.998 [CBS] ...'RasServer' : applicability: Applicable
        2504: 2009-11-23 11:12:02.906 [CbsUIHandler] Initiate:
        2504: 2009-11-23 11:12:02.906 [InstallationProgressPage] Installing...
        2504: 2009-11-23 11:12:54.311 [CbsUIHandler] Error: -2147023293 :
        2504: 2009-11-23 11:12:54.313 [CbsUIHandler] Terminate:
        2504: 2009-11-23 11:12:54.316 [InstallationProgressPage] Verifying installation...
        2504: 2009-11-23 11:12:54.326 [CBS] ...done installing 'IAS NT Service RasServerAll RasRoutingProtocols '. Status: -2147023293 (80070643)
        2504: 2009-11-23 11:12:54.329 [NPAS] Skipped configuration of 'Network Policy Server' because install operation failed.
        2504: 2009-11-23 11:12:54.330 [NPAS] Skipped configuration of 'Remote Access Service' because install operation failed.
        2504: 2009-11-23 11:12:54.330 [NPAS] Skipped configuration of 'Routing' because install operation failed.
        2504: 2009-11-23 11:12:54.330 [Provider] [STAT] ---- CBS Session Consolidation -----
        [STAT] For 'Network Policy Server', 'Remote Access Service', 'Routing'
        [STAT] installation(s) took '52.616957' second(s) total.
        [STAT] Configuration(s) took '0.0004948' second(s) total.
        [STAT] Total time: '52.6174518' second(s).

    From the System event log:

        Log Name:      System
        Source:        Service Control Manager
        Date:          23/11/2009 11:12:23
        Event ID:      7023
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      Av7Analytics
        Description:
        The Network Policy Server service terminated with the following error:
        %%-2147013892
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" />
            <EventID Qualifiers="49152">7023</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8080000000000000</Keywords>
            <TimeCreated SystemTime="2009-11-23T11:12:23.653578500Z" />
            <EventRecordID>1317</EventRecordID>
            <Correlation />
            <Execution ProcessID="468" ThreadID="2308" />
            <Channel>System</Channel>
            <Computer>Av7Analytics</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="param1">Network Policy Server</Data>
            <Data Name="param2">%%-2147013892</Data>
          </EventData>
        </Event>

    From the Setup event log:

        Log Name:      Setup
        Source:        Microsoft-Windows-ServerManager
        Date:          23/11/2009 11:12:56
        Event ID:      1616
        Task Category: None
        Level:         Error
        Keywords:
        User:          AV7ANALYTICS\RenamedAdmin
        Computer:      Av7Analytics
        Description:
        Installation failed.
        Roles: Network Policy and Access Services
        Error: Attempt to install Network Policy Server failed with error code 0x80070643. Fatal error during installation
        Error: Attempt to install Remote Access Service failed with error code 0x80070643. Fatal error during installation
        Error: Attempt to install Routing failed with error code 0x80070643. Fatal error during installation
        The following role services were not installed:
        Network Policy Server
        Routing and Remote Access Services
        Remote Access Service
        Routing
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-ServerManager" Guid="{8C474092-13E4-430E-9F06-5B60A529BF38}" />
            <EventID>1616</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x4000000000000000</Keywords>
            <TimeCreated SystemTime="2009-11-23T11:12:56.046431200Z" />
            <EventRecordID>115</EventRecordID>
            <Correlation />
            <Execution ProcessID="2504" ThreadID="2344" />
            <Channel>Setup</Channel>
            <Computer>Av7Analytics</Computer>
            <Security UserID="S-1-5-21-2753803390-1569373846-1208217686-500" />
          </System>
          <UserData>
            <EventXML xmlns:auto-ns3="http://schemas.microsoft.com/win/2004/08/events" xmlns="Event_NS">
              <message>
                Roles: Network Policy and Access Services
                Error: Attempt to install Network Policy Server failed with error code 0x80070643. Fatal error during installation
                Error: Attempt to install Remote Access Service failed with error code 0x80070643. Fatal error during installation
                Error: Attempt to install Routing failed with error code 0x80070643. Fatal error during installation
                The following role services were not installed:
                Network Policy Server
                Routing and Remote Access Services
                Remote Access Service
                Routing
              </message>
              <identifiers>14, 206, 207, 208, 205</identifiers>
            </EventXML>
          </UserData>
        </Event>

    Thanks in advance for any help. It's quite shocking that we're already having problems with Microsoft's "latest and greatest".

    Read the article

  • apache2 doesn't start with location

    - by Geod24
    I have a small domain, which I use only for personal purposes. I'm the main user and have at most 3-4 users at the same time. I use Apache with Passenger to serve Redmine. So I start with an empty Apache:

        root@xxxxx:/home/# service apache2 start
        [ ok ] Starting web server: apache2.
        root@xxxxx:/home/# a2dissite
        Your choices are:
        Which site(s) do you want to disable (wildcards ok)?

    Then I enable my site and restart (not reload) Apache:

        root@xxxxx:/home/# a2ensite 200-redmine
        Enabling site 200-redmine.
        To activate the new configuration, you need to run:
          service apache2 reload
        root@xxxxx:/home/# service apache2 restart
        [FAIL] Restarting web server: apache2 failed!
        [warn] The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning).
        root@xxxxx:/home/# service apache2 restart
        [FAIL] Restarting web server: apache2 failed!
        [warn] There are processes named 'apache2' running which do not match your pid file which are left untouched in the name of safety, Please review the situation by hand. ... (warning).
        root@xxxxx:/home/# pidof apache2
        20948

    Here's my 200-redmine.conf:

        PerlLoadModule Apache::Redmine
        <VirtualHost *:80>
            ServerName redmine.xxxxx.xxx
            DocumentRoot /var/www/redmine/public/

            ErrorLog ${APACHE_LOG_DIR}/redmine.error.log
            CustomLog ${APACHE_LOG_DIR}/redmine.access.log common

            MaxRequestLen 20971520

            <Directory "/var/www/redmine/public/">
                Options Indexes ExecCGI FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride all
            </Directory>

            SetEnv GIT_PROJECT_ROOT /opt/git/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
            <Location /git>
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler
                PerlAccessHandler Apache::Authn::Redmine::access_handler
                AuthType Basic
                Require valid-user
                AuthName "Redmine Git Repository"

                RedmineDSN "DBI:mysql:database=redmine;host=localhost:3306"
                RedmineDbUser "redmine"
                RedmineDbPass "password"

                RedmineCacheCredsMax 50
            </Location>
        </VirtualHost>

    Now, if I comment out the ScriptAlias / <Location> stuff, it works! In addition, starting the server with 200-redmine disabled and then enabling it works, but apache2 will die randomly, and the <Location> doesn't work.

    The logs show nothing:

        root@xxxxx:/home/# ll /var/log/apache2/
        total 8
        drwxr-xr-x 2 root root 4096 Oct 30 07:52 coredump
        -rw-r--r-- 1 root root    0 Nov  4 02:39 default.access.log
        -rw-r--r-- 1 root root 2356 Nov  4 02:39 default.error.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 other_vhosts_access.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 redmine.access.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 redmine.error.log
        root@xxxxx:/home/# ll /var/log/apache2/coredump/
        total 0
        root@xxxxx:/home/# cat /var/log/apache2/default.error.log
        [ 2013-11-04 02:39:36.0130 21471/7fcf090f4740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21470', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
        [ 2013-11-04 02:39:36.0255 21474/7f9a99fda740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/request
        [ 2013-11-04 02:39:36.0507 21479/7f8316b0f740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/logging
        [ 2013-11-04 02:39:36.0511 21471/7fcf090f4740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started!
        [ 2013-11-04 02:39:36.3158 21495/7fba6f686740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21491', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
        [ 2013-11-04 02:39:36.3304 21498/7f0106d9b740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/request
        [ 2013-11-04 02:39:36.3522 21503/7f92ad392740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/logging
        [ 2013-11-04 02:39:36.3525 21495/7fba6f686740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started!

    And at last:

        root@xxxxx:/home/# apache2ctl -t -D DUMP_VHOSTS
        VirtualHost configuration:
        *:80 is a NameVirtualHost
            default server redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
            port 80 namevhost redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
            port 80 namevhost redmine.xxxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
        root@xxxxx:/home/# uname -a
        Linux xxxx.xxx 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
        root@xxxxx:/home/# dpkg --list | grep apache2
        ii  apache2                    2.4.6-3                      amd64  Apache HTTP Server
        ii  apache2-bin                2.4.6-3                      amd64  Apache HTTP Server (binary files and modules)
        ii  apache2-data               2.4.6-3                      all    Apache HTTP Server (common files)
        ii  apache2-utils              2.4.6-3                      amd64  Apache HTTP Server (utility programs for web servers)
        ii  libapache2-mod-fcgid       1:2.3.9-1                    amd64  FastCGI interface module for Apache 2
        ii  libapache2-mod-passenger   4.0.10-1                     amd64  Rails and Rack support for Apache2
        ii  libapache2-mod-perl2       2.0.8+httpd24-r1449661-6+b1  amd64  Integration of perl with the Apache2 web server
        ii  libapache2-mod-perl2-dev   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - development files
        ii  libapache2-mod-perl2-doc   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - documentation
        ii  libapache2-mod-proxy-html  1:2.4.6-3                    amd64  Transitional package for apache2-bin
        ii  libapache2-mod-svn         1.7.13-2                     amd64  Apache Subversion server modules for Apache httpd
        ii  libapache2-reload-perl     0.12-2                       all    module for reloading Perl modules when changed on disk
        ii  libapache2-svn             1.7.13-2                     all    Apache Subversion server modules for Apache httpd (dummy package)
        root@xxxxx:/home/# a2dismod
        Your choices are: access_compat alias auth_basic authn_core authn_file authz_core authz_host authz_svn authz_user autoindex dav dav_svn deflate dir env fcgid filter mime mpm_event negotiation passenger perl proxy proxy_http rewrite setenvif status
        Which module(s) do you want to disable (wildcards ok)?
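    Since the failure only appears when the <Location /git> block (with its mod_perl Redmine handlers) is enabled, a hedged way to pin it down (not from the original question) is to test the config and start Apache in the foreground so the hang is visible:

        sudo apache2ctl configtest     # syntax check of the enabled config
        sudo apache2ctl -X             # single-process foreground start; watch where it stalls
        sudo tail -f /var/log/apache2/error.log

    If the Perl handlers are the suspect, commenting out the PerlLoadModule and Perl*Handler lines and restarting confirms whether Apache::Authn::Redmine (for instance, its MySQL connection at startup) is what blocks the start.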

    Read the article

  • What is a good method to solve cabal install problems?

    - by sp3ctum
    I've used cabal, the Haskell package manager, to install libraries and projects that I've cloned from various repositories. More often than not, I run into problems. Most projects make installation look super easy, but in my case that's not always true - sometimes they are very hard to get running. Some are so hard, in fact, that I've lost interest in a project solely because I couldn't install it. So instead of complaining, I'd like to ask what I should do to improve this situation, using my most recent problem as an example. I'm interested in trying out the Gitit project, a promising-looking personal wiki that runs on various version control systems. Here's what I've done: clone from Github, then run cabal install in the project directory like I'm told on the project install page:

        mika@eka:~/git/gitit$ ls
        BLUETRIP-LICENSE  CHANGES  HCAR-gitit.tex  LICENSE  Network  README.markdown  RELANN-0.6.1  Setup.lhs  TANGOICONS  YUI-LICENSE  data  expireGititCache.hs  gitit.cabal  gitit.hs  plugins
        mika@eka:~/git/gitit$ cabal install
        Resolving dependencies...
        cabal: cannot configure happstack-server-7.0.7. It requires base64-bytestring ==1.0.*
        For the dependency on base64-bytestring ==1.0.* there are these packages: base64-bytestring-1.0.0.0. However none of them are available.
        base64-bytestring-1.0.0.0 was excluded because gitit-0.10 requires base64-bytestring ==0.1.*
        mika@eka:~/git/gitit$

    So now I'm thinking: well, I'll install happstack-server on its own; maybe that will work:

        mika@eka:~/git/gitit$ cabal install happstack-server
        Resolving dependencies...
        Warning: happstack-server.cabal: Ignoring unknown section type: test-suite
        Configuring happstack-server-7.0.7...
        cabal: At least the following dependencies are missing:
        blaze-html ==0.5.*, hslogger >=1.0.2, monad-control ==0.3.*, network >=2.2.3, sendfile >=0.7.1 && <0.8, system-filepath >=0.3.1, text >=0.10 && <0.12, threads >=0.5, transformers-base ==0.4.*
        cabal: Error: some packages failed to install:
        happstack-server-7.0.7 failed during the configure step. The exception was: ExitFailure 1

    So it looks like some dependencies are missing. But isn't installing these dependencies the whole point of using cabal in the first place? What should I do - file bug reports (to which project?), install the dependencies manually, or something else? Bonus points for explaining what causes these kinds of problems.
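    A few generic diagnostics help with this class of failure (a sketch; the package names come from the error messages above, and relaxing version bounds is a common workaround, not Gitit's documented procedure):

        # Show the full install plan without building anything
        cabal install --dry-run

        # See which versions of the conflicting package are registered
        ghc-pkg list base64-bytestring

        # Workaround for conflicting bounds: relax the offending
        # constraint by hand in the .cabal file of the clone you
        # already have (base64-bytestring ==0.1.* -> ==1.0.* here),
        # then rebuild. Only safe if the two versions are actually
        # API-compatible.
        # edit gitit.cabal, then:
        cabal install

    The underlying cause is usually that all packages share one GHC package database, so two packages demanding incompatible version ranges of the same dependency cannot be installed side by side.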

    Read the article

  • Good way to backup code projects

    - by rkmax
    I work as a developer and have many projects. For most of them I use Git for versioning the code, but I also have some projects that are not code, such as interface sketches, ideas, etc. Currently my backups are a mess: I used Deja Dup before, but the downside comes when I want to restore my backups from Windows, since I periodically switch operating systems (Windows or Linux). My question: which is better - copying each folder (project) directly to my external disk, or creating a bare repository (git init --bare) for each project? Or is there something else I should do?
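    For the Git projects, one cross-platform option (a sketch; the paths below are hypothetical) is a bare mirror per project on the external disk, plus plain rsync for the non-code folders:

        # One-time: create a bare mirror of a project on the external disk
        git clone --mirror ~/projects/myapp /media/external/backups/myapp.git

        # Periodically: refresh the mirror (fetches all branches and tags)
        cd /media/external/backups/myapp.git && git remote update

        # Non-code material (sketches, notes): plain file sync
        rsync -a --delete ~/projects/sketches/ /media/external/backups/sketches/

    A bare mirror sidesteps the cross-platform restore problem, since Git on both Windows and Linux can clone from the same repository on the external disk.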

    Read the article

  • Installing a bundle for TextMate

    - by m73
    I've downloaded the 30-day trial version of TextMate and want to use a plugin for CoffeeScript. The instructions for installing the plugin say to go to this directory: cd ~/Library/Application\ Support/TextMate/Bundles (TextMate 1). But when I changed into the TextMate directory and listed its contents with ls, it only showed TextMate.pid - in other words, no Bundles directory. From inside the Bundles directory I'm supposed to run git clone git://github.com/jashkenas/coffee-script-tmbundle CoffeeScriptBundle.tmbundle, but I didn't want to try that without first being in the Bundles directory.
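    The Bundles directory may simply not exist yet on a fresh install, so one plausible fix (a sketch for TextMate 1; the reload step relies on TextMate's AppleScript support) is to create it first:

        # Create the per-user Bundles directory, then clone into it
        mkdir -p ~/Library/Application\ Support/TextMate/Bundles
        cd ~/Library/Application\ Support/TextMate/Bundles
        git clone git://github.com/jashkenas/coffee-script-tmbundle CoffeeScriptBundle.tmbundle

        # Ask TextMate 1 to pick up the new bundle without restarting
        osascript -e 'tell app "TextMate" to reload bundles'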

    Read the article

  • nginx reverse proxy subdomain is redirecting

    - by holtkampw
    So I have a frontend nginx server which proxies to several other nginx servers (running Passenger for Rails apps). Here's the relevant part of the frontend nginx config:

        server {
            listen 80;
            server_name git.domain.com;
            access_log /server/domain/log/nginx.access.log;
            error_log /server/domain/log/nginx_error.log debug;
            location / {
                proxy_pass http://127.0.0.1:8020/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

        server {
            listen 80;
            server_name domain.com;
            access_log /server/domain/log/nginx.access.log;
            error_log /server/domain/log/nginx_error.log debug;
            location / {
                proxy_pass http://127.0.0.1:8000/;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X_FORWARDED_PROTO https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Finally, here's the backend for git.domain.com:

        server {
            listen 8020;
            #server_name localhost;
            root /server/gitorious/gitorious/public/;
            passenger_enabled on;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So here's the problem: when I type in git.domain.com, my Gitorious install redirects to domain.com. It works perfectly there, but it ignores the subdomain. At first I thought it was the server_name directive; I have tried git.domain.com, domain.com, localhost, and currently none. Any ideas?
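    If the redirect comes from Gitorious itself rather than nginx, the Host header may not be the culprit: Gitorious builds absolute URLs from its own configuration. A plausible check (a sketch; gitorious.yml and the gitorious_host key are standard Gitorious configuration, but the path below is only inferred from the root directive above):

        # Gitorious generates redirect URLs from its own config,
        # so check the configured host first
        grep gitorious_host /server/gitorious/gitorious/config/gitorious.yml
        # expected:
        #   gitorious_host: git.domain.com

    Keeping proxy_set_header Host $host in the frontend also matters, so the Rails app sees git.domain.com rather than the backend address.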

    Read the article
