Search Results

Search found 21263 results on 851 pages for 'website deployment'.

Page 278/851

  • Maven2: Best practice for Enterprise Project (EAR file)

    - by Maik
    Hello everyone, I am just switching from Ant to Maven and am trying to figure out the best practice for setting up an EAR-file-based enterprise project. Let's say I want to create a fairly standard project with a JAR file for the EJBs, a WAR file for the web tier and an encapsulating EAR file, with the corresponding deployment descriptors. How would I go about it? Create the project with archetypeArtifactId=maven-archetype-webapp like a WAR file and extend from there? What is the best project structure (and POM file example) for this? Where do you put the EAR-file-related deployment descriptors, etc.? Thanks for any help.
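
    The usual approach is a multi-module build: a parent POM (packaging "pom") that aggregates an EJB module, a WAR module and an EAR module, with the EAR module declaring the other two as dependencies so the maven-ear-plugin can bundle them. A minimal sketch of the parent and EAR POMs - the group/artifact IDs, versions and context root below are placeholders, not taken from the question:

        <!-- parent/pom.xml: aggregates the three modules -->
        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>myapp-parent</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>pom</packaging>
          <modules>
            <module>myapp-ejb</module>   <!-- packaging: ejb -->
            <module>myapp-web</module>   <!-- packaging: war -->
            <module>myapp-ear</module>   <!-- packaging: ear -->
          </modules>
        </project>

        <!-- myapp-ear/pom.xml: pulls the other modules in and builds the EAR -->
        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <groupId>com.example</groupId>
            <artifactId>myapp-parent</artifactId>
            <version>1.0-SNAPSHOT</version>
          </parent>
          <artifactId>myapp-ear</artifactId>
          <packaging>ear</packaging>
          <dependencies>
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>myapp-ejb</artifactId>
              <version>1.0-SNAPSHOT</version>
              <type>ejb</type>
            </dependency>
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>myapp-web</artifactId>
              <version>1.0-SNAPSHOT</version>
              <type>war</type>
            </dependency>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-ear-plugin</artifactId>
                <configuration>
                  <modules>
                    <ejbModule>
                      <groupId>com.example</groupId>
                      <artifactId>myapp-ejb</artifactId>
                    </ejbModule>
                    <webModule>
                      <groupId>com.example</groupId>
                      <artifactId>myapp-web</artifactId>
                      <contextRoot>/myapp</contextRoot>
                    </webModule>
                  </modules>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </project>

    The EAR-related deployment descriptors are either generated by the maven-ear-plugin (it can write application.xml for you) or, if hand-written ones are needed, conventionally placed under myapp-ear/src/main/application/META-INF.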

    Read the article

  • msbuild/clickonce publish files generated during the build

    - by Anonym
    As part of my build process I generate some files that should be included when creating a ClickOnce deployment. Here is a blog post of someone explaining how to include items that aren't part of the project. However, as someone mentions in the comments of that blog post, doing it in the "BeforePublish" task won't update the deployment manifest and the files won't get downloaded - it works fine if you do it in the "BeforeBuild" task though. This gives me a chicken-and-egg problem, as I have to perform the build first to generate the files I want included. Does anyone have a solution for this? (P.S. at the moment, generating the ClickOnce deployment using mage.exe is not an option; it has to be done using the Publish target.)
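
    One way around the ordering problem is to move the file generation itself into BeforeBuild, so the generated files already exist and are listed as Content items before the Publish target builds the manifests. A rough sketch of what that could look like in the .csproj - the GenerateFiles.exe tool, the Generated folder and the wiring are placeholders, not from the question:

        <!-- Runs before compilation, so the generated files exist before Publish gathers files -->
        <Target Name="BeforeBuild">
          <Exec Command="GenerateFiles.exe -out &quot;$(ProjectDir)Generated&quot;" />
          <ItemGroup>
            <!-- Marking them as Content lets ClickOnce pick them up for the deployment manifest -->
            <Content Include="$(ProjectDir)Generated\**\*.*">
              <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
            </Content>
          </ItemGroup>
        </Target>

    This is only a sketch under the assumption that whatever currently produces the files can be invoked from an Exec task; if the generation genuinely depends on the compiled output, a two-pass build (build, generate, then publish) may be the only option.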

    Read the article

  • WCF Facility: Metadata publishing for this service is currently disabled

    - by cvista
    I asked this before and got nowhere, so I'm asking again as I'm now desperate! If I create a new WCF project I can browse the metadata instantly. If I try the same when using the WCF Facility, I get the following: "Metadata publishing for this service is currently disabled." I followed the instructions there and in a million other places and got nowhere. If I copy the contents of my facility service into the newly created project, it complains that aspNetCompatibilityEnabled isn't enabled. So I enable it, and then mex is disabled again and I get "Metadata publishing for this service is currently disabled." again! This is driving me crazy - I have tried, tried, tried to follow every example on the web! Here is my current configuration - there is no client yet:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service name="IbzStar.WebServices.UserServices" behaviorConfiguration="ServiceBehavior">
              <!-- Service Endpoints -->
              <endpoint address="" binding="wsHttpBinding" contract="IbzStar.WebServices.IUserServices">
                <!-- Upon deployment, the following identity element should be removed or replaced to reflect
                     the identity under which the deployed service runs. If removed, WCF will infer an
                     appropriate identity automatically. -->
                <identity>
                  <dns value="localhost"/>
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="ServiceBehavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the
                     metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value below to true.
                     Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Please, someone help me out before my laptop gets launched into orbit! w://

    Read the article

  • Merging MySQL Structure and Data

    - by Shahid
    I have a MySQL database running on a deployment machine which also contains data. Then I have another MySQL database which has evolved in terms of STRUCTURE + DATA for some time. I need a way to merge the changes (ONLY) for both structure and data into the DB on the deployment machine without disturbing the existing data. Does anyone know of a tool which can do this safely? I have had a look at a few comparison tools, but I need one which can automate the merge operation. Note also that most of the data in the tables is BINARY, so I can't use many file comparison tools. Does anyone know of a solution to this? Thanks.
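
    Even without a dedicated comparison tool, you can get a long way by diffing schema-only dumps and replaying only the changed data. A rough sketch, assuming both servers are reachable from one machine - the host names, user and table names are placeholders:

        # Dump structure only from both servers and diff them to see what changed
        mysqldump --no-data --skip-comments -h dev-host  -u user -p mydb > dev_schema.sql
        mysqldump --no-data --skip-comments -h prod-host -u user -p mydb > prod_schema.sql
        diff prod_schema.sql dev_schema.sql > schema_changes.diff

        # For the data, dump only the tables whose contents changed; --hex-blob keeps the
        # BINARY columns intact, and --insert-ignore only adds rows missing on the target
        # (use --replace instead if existing rows should be overwritten)
        mysqldump --no-create-info --hex-blob --insert-ignore -h dev-host -u user -p mydb changed_table > changed_table_data.sql
        mysql -h prod-host -u user -p mydb < changed_table_data.sql

    The schema diff still has to be turned into ALTER statements by hand (or by a comparison tool), so this is a stop-gap rather than a fully automated merge.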

    Read the article

  • Ant deploy WAR file to tomcat server failure

    - by spowers
    I am having a little trouble getting my deployment task in Ant to function. I have a WAR file that is generated as part of the build process, and I am now trying to auto-deploy it to my test server. In Ant I have defined a deployment task as seen below. When I try to run it I get a file-not-found error on the server in the catalina.out log file. Does anyone have any idea what I am doing wrong that is causing this deploy not to work? I have checked the path; it is correct and the WAR file exists. Thanks.

        username="${lamp.user}"
        password="${lamp.password}"
        update="true"
        path="/beam"
        localWar="file:${module.beam.basedir}\out\war\beam.war" /
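
    Those attributes look like the remains of a Tomcat manager <deploy> task whose surrounding markup was stripped when the question was posted. For reference, the task is usually declared along these lines - the manager URL, target name and catalina-ant.jar location here are assumptions, not from the question:

        <!-- catalina-ant.jar ships with Tomcat; the location below is an assumption -->
        <taskdef name="deploy" classname="org.apache.catalina.ant.DeployTask"
                 classpath="${tomcat.home}/lib/catalina-ant.jar"/>

        <target name="deploy-war">
          <!-- On Tomcat 7 and later the manager URL is usually http://host:8080/manager/text -->
          <deploy url="http://testserver:8080/manager"
                  username="${lamp.user}"
                  password="${lamp.password}"
                  update="true"
                  path="/beam"
                  localWar="file:${module.beam.basedir}\out\war\beam.war"/>
        </target>

    The Windows-style backslashes inside the file: URL are also worth double-checking if the build or the server runs on a different OS.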

    Read the article

  • Minimum OS version number, iPhone app

    - by Michael Frost
    Hi all, I've built an iPhone app which is live in the App Store. When I originally submitted the app, it showed up in the App Store as requiring iPhone OS 3.1.3. When later updating the app I made sure the Xcode settings for the App Store build target had the Base SDK set to 3.1.3 and the Deployment Target set to 3.0; however, it still shows up in the App Store as requiring 3.1.3. From what I've understood, the Deployment Target is what sets the requirement in the App Store? Or is there any information concerning this that I should have updated in iTunes Connect when submitting the updated app? Thanks, Michael

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (containing translated language strings) with close to 200K rows. I know our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?

    Read the article

  • How to regularly merge two git repositories, one with submodules into one without

    - by smoothify
    I maintain a Drupal project in a git repository containing submodules. This works well for me overall, and I like the submodule approach. However, I would like to move my site to a hosting provider that offers deployment via git push but doesn't work with submodules. I would like to keep my current repository intact, and then when I'm ready to deploy, merge the changes from my repository into the deployment repository, with any submodules exported into the tree. So it needs to be (semi-)automated, so I can just run a command or two to initiate the merge and then push to the server. Ideally it would keep track of individual commits, but I wouldn't mind if it squashed them into a single commit. What would be the most effective way to achieve this?
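
    One semi-automated approach: keep a second clone used only for deployment, expand the submodules into plain files there, and record the result as a single snapshot commit before pushing to the host. A rough sketch - the paths, remote name and commit message are assumptions:

        #!/bin/sh
        set -e

        SRC=~/work/mysite            # normal repo with submodules
        DEPLOY=~/work/mysite-deploy  # plain repo the hosting provider accepts via git push

        # Bring the source tree and its submodules up to date
        cd "$SRC"
        git pull
        git submodule update --init --recursive

        # Copy the working tree (minus all git metadata) into the deploy repo
        rsync -a --delete --exclude='.git' --exclude='.gitmodules' "$SRC"/ "$DEPLOY"/

        # Record everything as one squashed snapshot and push it to the host
        cd "$DEPLOY"
        git add -A
        git commit -m "Deploy snapshot $(date +%Y-%m-%d)" || echo "nothing to commit"
        git push origin master

    This squashes history into one commit per deploy; preserving individual commits would need a more involved history rewrite.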

    Read the article

  • getting service from wsdd via xpath not working (xmltask)

    - by subes
    Hi, I am trying to get the XPath "/deployment/service". Tested on this site: http://www.xmlme.com/XpathTool.aspx

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
          <service name="kontowebservice" provider="java:RPC" style="rpc" use="literal">
            <parameter name="wsdlTargetNamespace" value="http://strategies.spine"/>
            <parameter name="wsdlServiceElement" value="ExposerService"/>
            <parameter name="wsdlServicePort" value="kontowebservice"/>
            <parameter name="className" value="dmd4biz.container.webservice.konto.internal.KontoWebServiceImpl_WS"/>
            <parameter name="wsdlPortType" value="Exposer"/>
            <parameter name="typeMappingVersion" value="1.2"/>
            <operation xmlns:operNS="http://strategies.spine" xmlns:rtns="http://www.w3.org/2001/XMLSchema" name="expose" qname="operNS:expose" returnQName="exposeReturn" returnType="rtns:anyType" soapAction="">
              <parameter xmlns:tns="http://www.w3.org/2001/XMLSchema" qname="in0" type="tns:anyType"/>
            </operation>
            <parameter name="allowedMethods" value="expose"/>
            <parameter name="scope" value="Request"/>
          </service>
        </deployment>

    I absolutely can't find out why it always tells me that my XPath does not match... This may be stupid, but am I missing something?

    EDIT: Thanks to the answer from Dimitre Novatchev I was able to find a workaround:

        <xmltask failwithoutmatch="true" report="false">
          <fileset dir="${src.gen}/" includes="**/*-deploy.wsdd" />
          <copy path="//*[local-name()='service']" buffer="tmpServiceBuf" append="true" />
        </xmltask>
        <xmltask failwithoutmatch="true" report="false" source="${basedir}/env/axis/WEB-INF/server-config.wsdd" dest="${build.stage}/resources/WEB-INF/server-config.wsdd">
          <insert path="//*[local-name()='transport'][last()]" buffer="tmpServiceBuf" position="after" />
        </xmltask>

    Binding namespaces with xmltask (which is the tool that gave me the headaches) does not seem to be possible, so matching on local-name() sidesteps the document's default namespace. The code above did the trick.

    Read the article

  • Beginner servlet question: accessing files in a .war, which path?

    - by Navigateur
    When a third-party library I'm using tries to access a file, I'm getting "Error opening ... file ... (No such file or directory)" even though I KNOW the file is in the WAR. I've tried both packaged (.war) and "exploded" (directory) deployment, and the file is definitely there. I've tried setting full permissions on it too. It's on Unix (Ubuntu). The file is war/dict/index.sense and the error is "dict/index.sense (No such file or directory)". It works fine on my Windows computer when running in hosted mode as a GWT app from Eclipse, just not when I transfer it to the Unix machine for deployment. My question is: has anybody experienced this before, and/or are there differences in relative paths that I should consider, i.e. what's the root path for relative file access in a WAR?
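
    Relative paths inside a servlet are resolved against the JVM's current working directory (wherever the container was started from), not against the WAR, which is why the same code can behave differently on two machines. The portable route is to ask the container for the resource. A small sketch, assuming the file sits at dict/index.sense inside the WAR:

        import java.io.File;
        import java.io.InputStream;
        import javax.servlet.ServletContext;
        import javax.servlet.http.HttpServlet;

        public class DictServlet extends HttpServlet {
            @Override
            public void init() {
                ServletContext ctx = getServletContext();

                // Option 1: stream the packaged resource (the path is relative to the WAR root)
                InputStream in = ctx.getResourceAsStream("/dict/index.sense");

                // Option 2: if the third-party library insists on a real file path, ask the
                // container for it (may return null when the WAR is not exploded on disk)
                String realPath = ctx.getRealPath("/dict/index.sense");
                if (realPath != null) {
                    File indexFile = new File(realPath);
                    // hand indexFile (or realPath) to the third-party library here
                }
            }
        }

    If the library only accepts plain relative paths, another option is to resolve getRealPath("/") once at startup and prepend it before calling the library.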

    Read the article

  • Helicon ISAPI Rewrite Proxy 500 Internal Server Error

    - by Rob Stevenson-Leggett
    Hi, I have a website running at www.domain.com. The client now wants the website to appear to be running under www.otherdomain.com/whatson/brand/. Since the website is Umbraco, it won't run under a subfolder. I wanted to use ISAPI_Rewrite to proxy requests to www.domain.com using the following rule in a .htaccess at www.otherdomain.com/whatson/brand/:

        RewriteRule ^(.*)$ http://www.domain.com/$1 [P,L]

    However, when I apply this I get an ugly 500 Internal Server Error. There's nothing in the event log. So I turned on ISAPI logging and can see the following:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) init rewrite engine with requested uri /whatson/brand/home.aspx

    Then it tests all the other rewrite rules on the server. Then this:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) Htaccess process request w:\websites\otherdomain.com\docs2\whatson\brand\.htaccess
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (3) applying pattern '^(.*)$' to uri 'home.aspx'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) forcing proxy-throughput with http://www.domain.com/home.aspx
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) go-ahead with proxy request http://www.domain.com/home.aspx [OK]
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) rewrite 'home.aspx' -> '/whatson/brand/home.aspxx.rwhlp?p=0'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) internal redirect with /whatson/brand/home.aspxx.rwhlp?p=0 [INTERNAL REDIRECT]

    So it appears to work according to the logs, but I'm not seeing the page come through. It's worth noting that www.domain.com and www.otherdomain.com are on the same box. LogLevel is 3 and RewriteLogLevel is 3 (I've tried 9 and debug, but there is too much traffic going through the other sites on the box). Any ideas?

    Read the article

  • Install a web certificate on an Android device

    - by martani_net
    To gain access to WIFI at university I have to login with my user/pass credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't take my laptop with me often to university, so I usually want to connect using my HTC Magic, but I have no clue on how to install the certificate separately on Android, it is always rejected. [Edit2] : this is what is stated in their website Need for installation of official certificates CyberTrust validated by the CRU (http://www.cru.fr/wiki/scs/) The certificates contain information certified to generate encryption keys for data exchange, called "sensitive" as the password of a user. By connecting to CanalIP-UPMC, for example, the user must validate the identity of the server accepting the certificate appears on the screen in a "popup window". In reality, the user is unable to validate a certificate knowing, because a simple visual check of the license is impossible. Therefore, the certificates of the certification authority (CRU-Cybertrust Educationnal-ca.ca Cybertrust and-global-root-ca.ca) must be installed prior to the browser for the validity of the certificate server can be controlled automatically. Before you connect to the network-UPMC CanalIP you must register in your browser through the certification authority Cybertrust-Educationnal-ca.ca Download the Cybertrust-Educationnal-ca.ca, depending on your browser and select the link below : With Internet Explorer, click on the link following. With Firefox, click on the link following. With Safari, click the link following. If this procedure is not respected, a real risk is incurred by the user: that of being robbed password LDAP directory UPMC. A malicious server may in fact try very easily attack type "man-in-the-middle" by posing as the legitimate server at UPMC. The theft of a password allows the attacker to steal an identity for transactions over the Internet can engage the responsibility of the user trapped ... This is their website : http://www.canalip.upmc.fr/doc/Default.htm (in French, Google-translate it :)) Anyone knows how to install a web certificate on Android?

    Read the article

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. I thought simply to raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin settings for the JVM heap memory. It was on the standard Max Heap size of 512MB, and the Min Heap size was empty. After playing around a bit I have now set it to Min 50MB and Max 200MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors on the website, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are:

        Heap Memory Usage - Committed        194 MB
        Heap Memory Usage - Initial          50.0 MB
        Heap Memory Usage - Max              194 MB
        Heap Memory Usage - Used             163 MB
        JVM - Free Memory                    31.2 MB
        JVM - Max Memory                     194 MB
        JVM - Total Memory                   194 MB
        JVM - Used Memory                    163 MB
        Memory Pool - Code Cache - Used      13.0 MB
        Memory Pool - PS Eden Space - Used   6.75 MB
        Memory Pool - PS Old Gen - Used      155 MB
        Memory Pool - PS Perm Gen - Used     64.2 MB
        Memory Pool - PS Survivor Space - Used 1.07 MB
        Non-Heap Memory Usage - Committed    77.4 MB
        Non-Heap Memory Usage - Initial      18.3 MB
        Non-Heap Memory Usage - Max          240 MB
        Non-Heap Memory Usage - Used         77.2 MB
        Free Allocated Memory: 30mb
        Total Memory Allocated: 194mb
        Max Memory Available to JVM: 194mb
        % of Free Allocated Memory: 16%
        % of Available Memory Allocated: 100%

    My JVM arguments are:

        -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!
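
    With roughly 3 GB of RAM on the box and the JVM capped at 194 MB, there is clearly headroom to raise the heap. The values live in the ColdFusion Administrator (Server Settings > Java and JVM) or in jvm.config. A sketch of a common starting point for a small VPS - the exact numbers are an assumption and worth load-testing, and on a 32-bit JVM the max heap has to stay well below what the process can address (roughly 1.2-1.5 GB on Windows):

        # Minimum heap -> -Xms, maximum heap -> -Xmx; the remaining flags stay as they were
        -server -Xms512m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC
        -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/../
        -Dcoldfusion.libPath={application.home}/../lib

    Setting -Xms and -Xmx to sensible values (rather than a tiny 50/200 MB window) gives the garbage collector room to work, which is usually what removes the slowdown under load.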

    Read the article

  • Can't connect to computer via SBS2011 RWA

    - by sbrattla
    I've got an SBS 2011 Essentials server. Users are able to log on to Remote Web Access using their username and password. However, the trouble starts when a user attempts to log on remotely to his/her computer from the Remote Web Access website. When the user clicks on his/her computer (in the RWA website), the user is first presented with a window listing Publisher, Type, Remote Computer name and Gateway Server. Everything seems fine here, and the user clicks Connect. The user credentials are provided, and a connection is attempted. However, the logon attempt always fails with the message "The logon attempt failed". The logon attempt always generates three log events in the server log:

        EventId: 4672 - Special Logon
        EventId: 4624 - Logon
        EventId: 4634 - Logoff

    All events have the same timestamp. No events are logged on the client machine which the user attempts to log on to. Others have solved this by going to their IIS server and enabling "Windows Authentication" for Rpc and RpcWithCert (in Default Web Site). However, this is already in place on the server. I've also got RD CAPs and RD RAPs in place. As a side note: if I try to connect to any of the machines using Remote Desktop Connection with its "Connect from anywhere" functionality, then things work flawlessly! In other words, the error only occurs when attempting to log in to a computer via the Remote Web Access website. I've run out of ideas for how I can solve this (too many hours spent). Any ideas highly appreciated!

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)

    Read the article

  • How do I access the web server on my desktop from my laptop?

    - by Steven
    I'm running Apache on my desktop machine and I would like to access a website on it from my laptop. This is some of the Apache config:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>
        ServerName localhost:80
        <Directory />
            Options FollowSymLinks
            AllowOverride all
            Order deny,allow
            Deny from all
        </Directory>

    On my laptop I've added the following to the HOSTS file:

        10.0.0.3 mysite.com

    But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update: I'm running WampServer 2.1 (Apache 2.2.17). Apache is up and running. I can ping 10.0.0.3 from the laptop, but I'm not able to reach http://mysite.com from it. IE gives me "403 Forbidden - The website declined to show this webpage". The only log that gets entries when trying to access the website from my laptop is access.log.

    access.log:

        10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202

    apache_error.log:

        [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    Update 2: My Apache config has the following entry:

        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1

    Could it be that this "Allow from" is stopping other computers from accessing the page?
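
    That Allow from 127.0.0.1 is the most likely blocker: with Order Deny,Allow plus Deny from all, only loopback requests get in, so the laptop (10.0.0.4) receives the 403. Binding NameVirtualHost/VirtualHost to 127.0.0.1 also means Apache only answers on loopback. A sketch of the relevant Apache 2.2 pieces, assuming a 10.0.0.x LAN (adjust the subnet as needed):

        # Answer on all interfaces (or on 10.0.0.3) rather than only 127.0.0.1;
        # this also clears the "mixing * ports and non-* ports" warning
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot "I:/wamp/www/mysite/"
            <Directory "I:/wamp/www/mysite/">
                Options FollowSymLinks
                AllowOverride All
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1 10.0.0.0/24   # loopback plus the local subnet
            </Directory>
        </VirtualHost>

    With WAMP specifically, the "Put Online" option toggles a similar Allow rule; either way, restart Apache after the change.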

    Read the article

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g. /chroot/website1, /chroot/website2. I'm using makejail, which is a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using this (maybe I should?). My question is which of the following approaches is better:

    1) Every website has its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most involved.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable - however, only PHP will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or inputs much appreciated!
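
    For option 2, php-fpm's per-pool chroot support means each site can get its own pool, its own user and its own jail while a single nginx instance stays outside the jails. A minimal sketch of one pool - the socket path, user names and jail location are assumptions:

        ; /etc/php5/fpm/pool.d/website1.conf - one pool per site, each in its own jail
        [website1]
        user = website1
        group = website1
        listen = /var/run/php-fpm-website1.sock
        chroot = /chroot/website1
        chdir = /
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    Since nginx lives outside the jail, the matching location block has to pass SCRIPT_FILENAME relative to the chroot (for example /htdocs$fastcgi_script_name if the site's files sit in /chroot/website1/htdocs), and the MySQL socket either has to be bind-mounted into each jail or replaced with a TCP connection, as you found.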

    Read the article

  • Different nginx rules based on referrer

    - by juana
    I'm using WordPress with WP Super Cache. I want visitors who come from Google (including all the country-specific referrers like google.co.in, google.co.uk, etc.) to see uncached content. These are my nginx rules, which are not working the way I want:

        server {
            server_name website.com;
            location / {
                root /var/www/html/website.com;
                index index.php;
                if ($http_referer ~* (www.google.com|www.google.co) ) {
                    rewrite . /index.php break;
                }
                if (-f $request_filename) {
                    break;
                }
                set $supercache_file '';
                set $supercache_uri $request_uri;
                if ($request_method = POST) {
                    set $supercache_uri '';
                }
                if ($query_string) {
                    set $supercache_uri '';
                }
                if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) {
                    set $supercache_uri '';
                }
                if ($supercache_uri ~ ^(.+)$) {
                    set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html;
                }
                if (-f $document_root$supercache_file) {
                    rewrite ^(.*)$ $supercache_file break;
                }
                if (!-e $request_filename) {
                    rewrite . /index.php last;
                }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    What should I do to achieve my goal?

    Read the article

  • does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host the web server and the MySQL database. As the website grows, we'll move the database to a different server and this machine will eventually only serve the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit or something. Things like: do the drives I picked work for the hot-swap bays, etc. What do you guys think? Am I good to go with this configuration? :)

        Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM - ECC DIMM 240-pin, 2x LGA1366 Socket, Power Provided: 600 Watt, 4 (free) x hot-swap - 3.5")
        CPU: Intel BX80614E5620 Xeon E5620 Processor - 4 Core, 2.40GHz, LGA 1366, 5.86GT/s QPI, 12MB Cache, 64-Bit, 80W, HyperThreading
        Memory: Crucial CT51272BB1339 4GB PC10600 DDR3 Memory - 1333MHz, ECC, Registered, 1x4096MB (possibly 3 or 4 of them)
        Hard Drives: Western Digital WD2002FAEX Caviar Black Hard Drive - 2TB, 3.5", SATA 6Gbps, 7200 RPM, 64MB (possibly 2 or 3)

    Thank you very much for any professional advice :)

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.

    Read the article

  • Web Server Routing Based On Location

    - by Eric
    I have a website that has users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong suffer latency problems: traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users using the .hk TLD are redirected to the Hong Kong web server. It shares the same database server with the Australian server, but due to aggressive SQL query caching, the performance impact of SQL query latency is negligible. But users accustomed to the Hong Kong website who have since traveled to Australia suffer additional latency, because they go to the .hk site, which redirects to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong, so this is a significant issue for me. Instead of redirecting users to the closest web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, Postgres and Django. Say I know how to estimate users' locations from their IP addresses, what is my next step? At what level should I work? What topics should I read up on?
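
    The usual next step is a GeoIP lookup at the web server (or application) layer and a redirect based on the country code. A sketch of how that could look in nginx, assuming nginx was built with the GeoIP module and a MaxMind country database is installed (paths and host names are placeholders):

        http {
            geoip_country /usr/share/GeoIP/GeoIP.dat;   # MaxMind country database

            server {
                listen 80;
                server_name example.com;

                # Send visitors whose IP geolocates to Hong Kong to the HK server
                if ($geoip_country_code = HK) {
                    return 302 http://hk.example.com$request_uri;
                }

                # ... normal proxying to the Django app on the Australian server ...
            }
        }

    The same decision could instead be made in Django middleware (GeoIP bindings are available there too), which makes it easier to let logged-in users override the choice; the topics to read up on are GeoIP/geolocation databases and, further down the line, GeoDNS or anycast if you want the redirect to happen at the DNS level.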

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure: http://ubuntuforums.org/showthread.php?t=1874860. I installed prosody with the following command (after downloading the package): sudo dpkg -i prosody_0.8.2-1_i386.deb. After installation I get the error below. I have tried using luarocks and sudo apt-get install, as suggested, to fix these, but it still keeps showing me the same errors.

        Selecting previously deselected package prosody.
        (Reading database ... 59416 files and directories currently installed.)
        Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
        Setting up prosody (0.8.2-1) ...
         * Starting Prosody XMPP Server prosody
        **************
        Prosody was unable to find luaexpat
        This package can be obtained in the following ways:
          Source: www.keplerproject.org/luaexpat/
          Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
          luarocks: luarocks install luaexpat
        luaexpat is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find luasocket
        This package can be obtained in the following ways:
          Source: www.tecgraf.puc-rio.br/~diego/professional/luasocket/
          Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
          luarocks: luarocks install luasocket
        luasocket is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find LuaSec
        This package can be obtained in the following ways:
          Source: www.inf.puc-rio.br/~brunoos/luasec/
          Debian/Ubuntu: prosody.im/download/start#debian_and_ubuntu
          luarocks: luarocks install luasec
        SSL/TLS support will not be available
        More help can be found on our website, at prosody.im/doc/depends
        [fail]
        invoke-rc.d: initscript prosody, action "start" failed.
        dpkg: error processing prosody (--install):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Errors were encountered while processing:
         prosody

    Thanks a lot for your patience and answers.

    Read the article

  • Apache and linux file permissions

    - by morpheous
    I recently moved a Symfony 1.3.2 website (a PHP web framework) from a Windows machine to Linux (Ubuntu 9.10). Ever since then, I have had all kinds of problems involving file permissions (even though the app ran without any of these problems on Windows). I ran symfony fix-perms, which applied a 777 mask to the web directory (presumably including its sub-folders) - as an aside, I think that is a potential security hole... I have been meaning to come in here to ask how to correctly set permissions. Currently, when attempting to save a file from my website, I am getting the following error:

        PHP Warning: imagejpeg() [function.imagejpeg]: Unable to open '/home/morpheous/work/webdev/frameworks/symfony/sites/project1/web/uploads/../images/thumbnail/959cd604cf6115014a3703bef5a50486a5520642.jpg' for writing: Permission denied in /home/morpheous/work/webdev/frameworks/symfony/sites/project1/apps/frontend/lib

    Here are the permissions on the folders:

        web
        drwxr-xr-x 16 morpheous morpheous  4096 2010-02-24 21:01 web

        web/uploads/../images
        drwxr-xr-x 13 morpheous morpheous 12288 2010-04-09 15:25 images

        web/uploads/../images/thumbnail
        drwxr-xr-x  3 morpheous morpheous  4096 2010-02-24 20:44 thumbnail

    Can someone kindly tell me how to set the permissions so that my website (presumably running as the Apache daemon) can write the files to the directory required above?
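
    Rather than a blanket 777, the usual fix is to make only the directories Apache actually writes to writable by the Apache user (www-data on Ubuntu), either by changing their group or their owner. A sketch, assuming the project lives where the error message says and Apache runs as www-data:

        WEB=/home/morpheous/work/webdev/frameworks/symfony/sites/project1/web

        # Hand the upload/image directories to the www-data group and let the group write
        sudo chgrp -R www-data "$WEB/uploads" "$WEB/images"
        sudo chmod -R g+w      "$WEB/uploads" "$WEB/images"

        # Optional: make new files inherit the group so future uploads keep working
        sudo chmod g+s "$WEB/images" "$WEB/images/thumbnail"

    The same treatment applies to Symfony's cache/ and log/ directories if those also need to be writable by the web server.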

    Read the article

  • Does NetworkSolutions have a good DNS service?

    - by joxl
    I'm recovering from a DNS disaster and I need some good advice on an alternate solution. My company owns a domain name through NetworkSolutions. Our website is hosted by another company, which also maintains our DNS records. Our email is hosted by Google Apps, and the MX records are maintained through the aforementioned website/DNS host. Yesterday our website/DNS host had a serious hiccup in some software and completely overwrote all of our DNS records with invalid values, successfully pointing our domain and MX records at the wrong servers. Unfortunately it wasn't caught until it had had time to propagate widely. On top of that, it wasn't fixed until several hours later; combine that with a long TTL on the records, and we have customers whose emails are still bouncing. Anyhow, I am now completely terrified of this company's ability to do a good job, so I am considering switching to NetworkSolutions for our DNS service. I need the ability to configure A, CNAME, MX, and TXT records, preferably with a nice user interface (our current provider has a poor UI and doesn't support TXT records). Is NetworkSolutions a recommended DNS host? I am a little biased in their direction because the service will be free, since we already pay them for our domain name.

    Read the article
