Search Results

Search found 37739 results on 1510 pages for 'static site'.

Page 243/1510 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • Django | Apache | Deploy website behind SSL

    - by planet260
    So here are my requirements. I have a website built in Django, deployed on Apache on Ubuntu. Before, there was no SSL involved, so the deployment was pretty simple. But now the requirements have changed: a few actions, like signup and login, must be served behind SSL, while the admin panel and everything else are served normally over HTTP. By following this tutorial I have set up Apache and SSL and generated certificates for SSL communication, but I am not sure how to proceed, i.e. how to serve only a few of my actions through SSL. Below is my configuration. The normal actions are working fine, but I don't know how to configure the SSL calls.

        WSGIScriptAlias / /home/ubuntu/myproject/src/myproject/wsgi.py
        WSGIPythonPath /home/ubuntu/myproject/src

        <VirtualHost *:80>
            ServerName mydomain.com
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
            <Location "/login">
                RewriteEngine on
                RewriteRule /admin(.*)$ https://mydomain.com/login$1 [L,R=301]
            </Location>
        </VirtualHost>

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
            <Directory /home/ubuntu/myproject/src/myproject>
                <Files wsgi.py>
                    order deny,allow
                    Allow from all
                </Files>
            </Directory>
            Alias /static/admin/ "/home/ubuntu/myproject/src/static/admin/"
            <Directory "/home/ubuntu/myproject/src/static/admin/">
                Order allow,deny
                Options Indexes
                Allow from all
                IndexOptions FancyIndexing
            </Directory>
        </VirtualHost>

    Can you please help me out on how to achieve this? What am I doing wrong? I have read a lot of tutorials, but honestly I am not very good at configurations. Any help is appreciated.
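    One possible direction (a sketch only, not a verified fix; it assumes the secure actions live under /login and /signup and that mod_rewrite is enabled) is to keep Django mounted in both virtual hosts and redirect only those paths from the HTTP vhost to HTTPS, while bouncing everything else on the HTTPS vhost back to plain HTTP:

        # inside <VirtualHost *:80>: push only the sensitive actions to HTTPS
        RewriteEngine on
        RewriteRule ^/(login|signup)(.*)$ https://mydomain.com/$1$2 [L,R=301]

        # inside <VirtualHost *:443>: send everything that is not login/signup back to HTTP
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/(login|signup)
        RewriteRule ^(.*)$ http://mydomain.com$1 [L,R=301]

    If Django does not answer on the :443 vhost at all, the WSGIScriptAlias line may also need to be repeated inside that vhost so that mod_wsgi handles the HTTPS requests.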

    Read the article

  • Bridging networks problems

    - by Eric
    In my setup I have 3 computers and 2 (wireless D-Link) routers.

    Computer1 has ethernet and wireless interfaces:
        ethernet : 192.168.0.x (DHCP)
        wireless : 192.168.10.254 (static)
    Computer2 has ethernet with two IPs:
        ethernet1 : 192.168.0.90 (static)
        ethernet2 : 192.168.10.110 (static)
    Computer3 is a particular device with a hardcoded IP that I can't change:
        wireless : 192.168.10.41 (static)

    Router1 manages internet and DHCP for network 192.168.0.0/24. Router2 is more complicated: I don't use DHCP on it, I use it to bridge between both networks, and its static IP is 192.168.10.1.

        Computer1 can ping Computer2.
        Computer1 can ping Computer3.
        Computer1 can ping Router1.
        Computer1 cannot ping Router2.
        Computer2 cannot ping Computer3.
        Computer2 can ping Router2.
        Router1 can ping Router1.
        Router2 can ping Computer2.
        Router2 cannot ping Computer1.
        Router2 cannot ping Computer3.

    This is very weird. Router2 manages the wireless connection, so it should be able to ping its own computers, right? My question is obviously: how can I make it so Computer2 can access everything else? This is a traditional case of "it was working before Christmas and now it doesn't". The ethernet wiring is as follows:

        [ Computer1 ]----[ Router1 ]---[ Router2 ]---[ Computer3 ]

    I am using switch (LAN) ports on Router1/2.

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so just the base URL will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring will always be last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
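    One way out, offered only as a sketch of the usual fix rather than something from the thread: anchor each prefix so it has to be followed by a slash or the end of the path. Because %{REQUEST_URI} never contains the query string, this keeps matching after login as well:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    With that anchor, /drupaltest no longer matches the ^/drupal condition, and the order of the blocks stops mattering.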

    Read the article

  • Routing / binding 128 to one server

    - by Andrew
    I have an Ubuntu server with 128 IPs (static external IPs, 86.xx.xx.16), and I want to crawl pages through different IPs. The gateway is xx.xxx.xxx.1, the main IP is xx.xxx.xxx.16, and the other 128 IPs are xx.xxx.xxx.129/255. I tried this configuration in /etc/network/interfaces but it doesn't work. It works if I remove the gateway from the aliases eth0:0 and eth0:1, so I think this is a routing problem.

        auto lo
        iface lo inet loopback

        auto eth0
        auto eth0:0
        auto eth0:1

        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

    Also, please tell me how to "reset" every change that I made in networking and routing. Thank you
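    A sketch of the usual shape of such a configuration (not a tested fix for this particular box): there can only be one default gateway, so it goes on the primary interface and the aliases just carry extra addresses. The values below are placeholders copied from the question:

        auto eth0
        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        auto eth0:0
        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128

        auto eth0:1
        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128

    Making the crawler go out through a specific source address is then a matter of binding each request to one of those addresses in the crawling software, not of extra gateway lines. As for "resetting": restoring a backup of /etc/network/interfaces and running something like "ip addr flush dev eth0" followed by a networking restart is the blunt but common way, assuming console access is available in case the interface drops.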

    Read the article

  • mod_rewrite hide subdirectory in return url part2

    - by user64790
    Hi, I am having an issue trying to get my mod_rewrite configuration right. I have a site: 0.0.0.0/oldname/directories/index.php. I would like to rename "oldname" to "newname", resulting in 0.0.0.0/newname/directories/index.php, etc. So when a user navigates to 0.0.0.0, my site will automatically send them to 0.0.0.0/oldname/index.php. I'm not planning on moving my content; marketing have asked me to rename the site folder. I would like to mask the request for 0.0.0.0/oldname/index.php as 0.0.0.0/newname/index.php. Also, if a user navigates from index.php to a link of, say, /oldname/project1/index.php, the final URL returned to the browser should be /newname/project1.php, without having to move or edit site links. I also understand my hyperlinks will refer to /oldname, but this is acceptable. Any help would be highly appreciated. Regards
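    A rough sketch of the usual pattern for this (assuming the files stay in /oldname on disk and that .htaccess rewrites are allowed), using %{THE_REQUEST} so the two rules don't chase each other into a redirect loop:

        RewriteEngine On

        # Visibly redirect URLs the client actually typed with /oldname/ to /newname/
        RewriteCond %{THE_REQUEST} \s/oldname/
        RewriteRule ^oldname/(.*)$ /newname/$1 [R=301,L]

        # Internally serve /newname/... from the existing /oldname/... content
        RewriteRule ^newname/(.*)$ /oldname/$1 [L]

    The browser then shows /newname/... everywhere, while the content and the existing links keep living under /oldname.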

    Read the article

  • Google Analytics Not tracking data correctly IP-address issue?

    - by PaperThick
    I have developed a small site for a client and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(
            ['_setAccount', 'UA-XXXXXXXX-2'],   // My company's GA account
            ['_trackPageview'],
            ['b._setAccount', 'UA-XXXXXXXX-1'],  // Test GA account
            ['b._trackPageview'],
            ['th._setAccount', 'UA-XXXXXXX-3'],
            ['th._setDomainName', '.clientdomain.se'],  // Client GA account
            ['th._trackPageview']
          );
          (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        </head>

    As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends, but the problem is that on the client's side the visitor count is more than double what it is on my end (20 000 vs 5 000). At first I thought the data was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10 000 visits and then stopped tracking because that was the limit on my account. The page my site is on is served from an IP address (http://XXX.XXX.XX.X/campaign/) and not from a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts, which are new). I recently upgraded my entire site and made some redirection changes in my .htaccess file. I have made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked with Grammarly and Plagium to ensure I have no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. It happened within a couple of minutes: I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should, but I'm scared I have some loop or other, even though the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap could do my rankings some harm. My website is pretty SEO-optimized on-site. I have about 1500 indexed backlinks and have been building them steadily over about half a year. I would really appreciate some clarity on this matter.
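    For what it's worth, a generic illustration (not taken from this site's actual .htaccess) of how the index.php/ removal is usually written so that it cannot loop: the 301 only fires on the URL the client originally requested, and the internal rewrite never triggers the redirect again:

        RewriteEngine On

        # 301 requests that still carry /index.php/ in the original request line
        RewriteCond %{THE_REQUEST} \s/index\.php/(\S*)
        RewriteRule ^ /%1 [R=301,L]

        # quietly route clean URLs back to the front controller
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]

    Comparing the live rules against something of this shape, and fetching a few old URLs with a header checker to confirm they answer with a single 301 rather than a chain, is a cheap sanity check while waiting for Google to recrawl.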

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided to not use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver

            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            <IfModule rewrite_module>
                RewriteEngine ON

                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]

                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]

                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>

            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory that has an index.php, devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
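    One detail that may explain the directory 404s (a general observation about Apache, offered as a sketch rather than a confirmed fix for this exact config): in VirtualHost context the request has not been mapped to the filesystem yet, so %{REQUEST_FILENAME} is effectively just the URL path and the -d / !-d tests never see the real directory. That lets the "foo/ -> foo.php" rule fire even for real directories like /dir/. Testing against %{DOCUMENT_ROOT}%{REQUEST_URI} instead is one way around it:

        # let real directories fall through to mod_dir and its DirectoryIndex
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
        RewriteRule ^ - [L]

        # only map trailing-slash URLs to .php when no such directory exists
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule (.*)/$ /$1.php [L]

    The alternative is to move the rules into a <Directory> block or a per-directory .htaccess, where REQUEST_FILENAME is already a filesystem path.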

    Read the article

  • Setting up a subdomain on IIS7

    - by EvanGWatkins
    I have a dev server for my C# web application, and to access the dev site I go to the server name in my browser (lfi-fsxmv06) and I can access my web application. Now I want to set up a subdomain of that (test.lfi-fsxmv06); is this possible? The bindings on the dev site (lfi-fsxmv06) are http, port 80, IP address *, and the hostname is blank. The bindings on the subdomain site are http, port 80, IP address *, and the hostname is test.lfi-fsxmv06, however if I try t

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from Bluehost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured, and it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console and created a "hosted zone" for the domain, then created a new record set of type "A" using the IP address as the value. I have a domain name registered with Bluehost. I've logged into the Bluehost account and updated the domain name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly the site loads; however, it doesn't load when using the domain name (I get a Google Chrome "Oops" error page saying the page is not found). I've tried using this site, http://dns.squish.net/, to debug, but it seems to be giving me the correct results:

        fizaclegems.com 300 IN A 107.20.209.78

    where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?
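    A quick, generic way to see whether the delegation has actually moved (assuming a machine with dig available; the nameserver below is a placeholder to replace with one of the four listed in the Route 53 console):

        # which nameservers does public DNS currently return for the domain?
        dig NS fizaclegems.com +short

        # does the A record resolve when asking one of the Route 53 servers directly?
        dig A fizaclegems.com @ns-123.awsdns-45.com +short

    If the second query answers correctly but the first still shows the old Bluehost nameservers, the record setup is fine and the registrar-level NS change simply has not propagated yet, which can take up to 24-48 hours.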

    Read the article

  • IIS configuration to publish files

    - by Andy.l
    I have a web service that will save a file that will be published externally through IIS. The idea was to use WebDAV to save the file, but that would mean the file could be altered externally as well. The idea is to have 2 websites on the IIS server that I publish the file from. One site, http://internalpublish.local/vfolder, where vfolder points to a file share where the file would be saved through WebDAV. The other site would be http://externalpublish.com/vfolder, where vfolder points to the same physical folder as on the internal site, but WebDAV is NOT enabled on this site. Would this cause any issues? Any feedback would be gratefully appreciated. /Andy.l

    Read the article

  • OpenVPN Chaining

    - by noderunner
    I'm trying to set up an OpenVPN "chain", similar to what is described here. I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach. A client can connect to either one for access to the hosts/services on that respective network. But servers A and B are also connected to each other: the servers on each network have a "site-to-site" connection between the two. What I'm trying to accomplish is the ability to connect to network A as a client, and then make connections with hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this:

        [Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B]
         (tun0)        (tun0)         (tun1)         (tun0)         (eth0)        (eth0)

    The whole idea is that Server A should route traffic destined for network B through the "site-to-site" VPN set up on tun1 when a client from tun0 tries to connect. I did this simply by setting up two connection profiles on Server A. One profile is a standard server config running on tun0, defining a virtual client network, IP address pool, pushing routes, etc. The other is a client connection to Server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" to the clients advertising a route to network B. On Server A, this seems to work when I look at tcpdump output. If I connect as a client and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on Server A:

        tcpdump -nSi tun1 icmp

    The weird thing is that I don't see Server B receiving that traffic through the tunnel. It's as if Server A is sending it through the site-to-site connection like it should, but Server B is completely ignoring it. When I look for the traffic on Server B, it simply isn't there. A ping from Server A to Host B works fine. But a ping from a client connected to Server A to Host B does not. I'm wondering if Server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to clients? Does anyone know if I need to do something on Server B in order for it to see the traffic? This is a complicated problem to explain, so thanks if you stuck with me this far.
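    That closing guess is the usual culprit: Server B has no idea that Server A's client pool lives behind the site-to-site link, so it drops or misroutes those packets. A sketch of the standard remedy, assuming (purely for illustration) that Server A hands clients addresses from 10.8.0.0/24 and that the site-to-site server on B uses client-config-dir:

        # Server B, site-to-site server config: kernel route for A's client pool
        route 10.8.0.0 255.255.255.0

        # Server B, client-config-dir file for Server A's site-to-site client:
        # tells OpenVPN that this particular client is the gateway for that pool
        iroute 10.8.0.0 255.255.255.0

    Both the route and the iroute are needed (one for the kernel, one for OpenVPN's internal routing), along with ip_forwarding on Server B and firewall rules that allow the forwarded traffic.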

    Read the article

  • WHMCS on main website [closed]

    - by Miskone
    I ordered a dedicated root server at Hetzner. I have WHM and cPanel, and now I want to make a front site for the hosting company where users can control their accounts and the nameservers of their registered domains, register new domains, and order new hosting accounts. My question is: do I need to buy WHMCS for that, or something else? I need to put a client zone on the front site and I want to find the best solution for that. Sorry if this is not the topic for this site, but I just don't know where I can ask this question :(

    Read the article

  • php returns junk characters at end of everything

    - by blindJesse
    PHP appears to be adding junk characters to the end of everything it returns on a friend's site. I'm not an admin on the server, but I'd like to give an informed complaint to get this fixed. The site is http://daytoncodebreakers.org. You can see some junk at the end of every page on the site (what appear to be question marks with something else in the middle). I originally thought this was a WordPress issue, but check out http://daytoncodebreakers.org/whereisini.php (which is just a call to phpinfo) and http://daytoncodebreakers.org/hello.php (which is just 'Hello World'). I'm not sure if this is the most appropriate site, but I think this is a server config issue, so I'm posting it here (rather than Stack Overflow or Super User). Feel free to move it if you want.
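    When even a bare phpinfo() or 'Hello World' script grows trailing bytes, one common cause is a stray UTF-8 byte-order mark or whitespace in a file that PHP pulls into every request (for example something wired in via auto_prepend_file / auto_append_file). A generic way for whoever does have server access to hunt for that, sketched here with a placeholder docroot path:

        # list PHP files under the docroot that contain a UTF-8 BOM (EF BB BF)
        grep -rl --include='*.php' $'\xef\xbb\xbf' /path/to/docroot

        # check whether PHP is silently prepending or appending any file
        php -i | grep -E 'auto_(prepend|append)_file'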

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or Varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that Varnish can cache static content quite well and wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use Redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

        1. Throw Varnish in front of nginx and let Varnish cache static pages, with no app code changes. Redis would cache dynamic DB calls, which would require modifying my app code.
        2. Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chance of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
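    Not a benchmark, but a sketch of the "skip Varnish" direction for comparison: nginx's built-in proxy cache can micro-cache the dynamic responses coming out of node while still serving static files straight off disk, leaving Redis for sessions and app data. Every name below (the cache zone, upstream port, paths, the 1-second TTL) is an illustrative assumption, not part of the original setup:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=256m inactive=10m;

        upstream node_app {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;

            # static assets served directly by nginx
            location /static/ {
                root /srv/myapp/public;
                expires 7d;
            }

            # dynamic JSON/HTML micro-cached for one second to absorb bursts
            location / {
                proxy_http_version 1.1;
                proxy_set_header Host $host;
                proxy_set_header Upgrade $http_upgrade;    # keep socket.io upgrades working
                proxy_set_header Connection "upgrade";
                proxy_cache app_cache;
                proxy_cache_valid 200 1s;
                proxy_cache_bypass $http_cookie;           # never serve cached pages to requests carrying cookies
                proxy_pass http://node_app;
            }
        }

    Whether this beats a Varnish layer is exactly the kind of thing that has to be measured, but it keeps the moving parts down to the two servers already in place.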

    Read the article

  • which user is the website host

    - by Kossel
    I'm learning about servers, and I'm configuring nginx, MySQL, PHP and WordPress. The server distro is Debian 6. I created a new user, and I want each user to be the owner of their site folder, /var/www/site.one, so I ran chown -R kossel:kossel site.one. My problem is that my WordPress site only works if I chmod 644 wp-config.php, which everyone can read; the WordPress site suggests that file should be 640. My question is: when someone opens mydomain.com, WordPress has to access the wp-config.php file, but which user is it actually using to "read" that file? root? User kossel? Anyone else? How can I properly give it permission or ownership?
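    The file is read by whatever user the PHP worker process runs as (with nginx + php-fpm on Debian that is typically the user/group configured in the php-fpm pool, often www-data), not by root and not by whoever uploaded the files. A generic way to confirm and then tighten things, assuming the pool really does run as www-data:

        # see which users the PHP and nginx workers actually run as
        ps -eo user,comm | grep -E 'php|nginx'

        # keep kossel as the owner, give the PHP worker's group read access, nothing for others
        chown kossel:www-data /var/www/site.one/wp-config.php
        chmod 640 /var/www/site.one/wp-config.php

    With the group set to the worker's group, 640 is enough and the world-readable 644 is no longer needed.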

    Read the article

  • Why use hosts file?

    - by dK3
    My company has a staging site and we access it by a URL like this: www.example.com. Until today I did not realise that I had a line in my hosts file which said:

        192.0.2.0 www.example.com

    (the IP is fake here). Now, when I try to access this site through the IP, I cannot get access. Why is this the case? We even own the domain we are using, so I do not see the reason why we are using a hosts file, and moreover, why I cannot access the site through the plain IP (by the way, we are using an internal IP).

    Read the article

  • Two sites running same code and different config files?

    - by Gen
    I have a Windows 2008 R2 server with IIS, running one site (ASP.NET 4.5). All the parameters are written in the web.config file. I have to add a new site that will run on the same code (same root folder) as the first one, but will read parameters like SQL connection strings etc. from its own config file, not from the first site's web.config. How can I do that? Is it possible to run the second site in a different app pool? Both sites will run the same .NET version, of course. Thank you!

    Read the article

  • IIS 6.0 https not working "connection was reset"

    - by cad
    Application server: Windows Server 2003 SP2 with IIS 6.0. IIS has a "Default Web Site" (port 18000, SSL 443, ID=1) with a certificate created by me. I have a specific site called "scj.galaxy.Weekly" (port 80, SSL 443, ID=1272369728) that is working fine. I have an entry in windows/system32/drivers/etc/hosts that links galaxy.Weekly.scjdev.ds to the server IP, on both my local machine and the application server. These sites work:

        http://scj.galaxy.weekly/test.html works
        http://scj.galaxy.weekly/test.aspx works

    But https://scj.galaxy.weekly/test.html fails. The error message is: "The connection was reset. The connection to the server was reset while the page was loading." The certificate was working fine for months. It was created with something similar to this:

        Selfssl /N:CN=*.scjdev.ds /V:3650 /S:1 /P:443

    I have tried several options and none of them are working:

        1) Create a certificate only in "Default Web Site" and link it to SecureBindings from the command prompt:
           cscript adsutil.vbs set /w3svc/1272369728/SecureBindings ":443:galaxy.Weekly.scjdev.ds"
        2) Create a certificate only in the "Galaxy" site and link it to SecureBindings.
        3) Create a certificate in both and link them to SecureBindings.

    Probably I am missing a step or something, but I can't see it. Here is the relevant config of the Galaxy site:

        <IIsWebServer Location="/LM/W3SVC/1272369729"
            AuthFlags="0"
            LogPluginClsid="{FF160663-DE82-11CF-BC0A-00AA006111E0}"
            SSLCertHash="c36a514a0be90fbc121d9c19bb052842289d5aee"
            SSLStoreName="MY"
            SecureBindings=":443:galaxy.Weekly.scjdev.ds"
            ServerAutoStart="TRUE"
            ServerBindings=":80:galaxy.Weekly.scjdev.ds"
            ServerComment="galaxy.Weekly.scjdev.ds"
        >
        </IIsWebServer>

        <IIsWebVirtualDir Location="/LM/W3SVC/1272369729/root"
            AccessFlags="AccessRead | AccessScript"
            AppFriendlyName="Default Application"
            AppIsolated="2"
            AppRoot="/LM/W3SVC/1272369729/Root"
            AuthFlags="AuthAnonymous | AuthNTLM"
            DefaultDoc="Default.aspx"
            DirBrowseFlags="EnableDirBrowsing | DirBrowseShowDate | DirBrowseShowTime | DirBrowseShowSize | DirBrowseShowExtension | DirBrowseShowLongDate"
            Path="D:\Webs\Galaxysite"
            ScriptMaps="some config..."
        >
        </IIsWebVirtualDir>

    Read the article

  • Best practice- handling images on website

    - by Steve
    I am porting an old eCommerce site to MVC 3 and would like to take advantage of design improvements. The site currently has product images stored in 3 sizes: thumbnail, medium (for display in a list), and expanded for a zoomed look. Right now we have to upload 3 separate images that are sized exactly right and provide 3 different names that match what the site expects, etc.; it is a pain. I'd like to upload just 1 file, the large one, then let the site reduce it to the needed sizes, and I'd like the flexibility to change the thumbnail and list sizes depending on user preferences, form factor (e.g. mobile, iPad, desktop), etc., so I might need many copies of the same image. My question is: should the image be reduced and then saved several times upon upload, and if so, what is a good storage/naming convention? The other idea is to store just the single image but resize it programmatically before serving it to the client. Has anybody done this, and what are the trade-offs besides a few more machine cycles? How do you pass a temporary image in memory to the client (there is no URL)?

    Read the article

  • Is it possible to use two different ErrorDocuments for different paths of a website?

    - by tapwater
    This is my first question on Stack Exchange and it might be a bit confusing. I currently run PmWiki (sorry, you'll have to google it; new users can only have 2 hyperlinks) at mydomain.com/pmwiki. I have a 404 page and .htaccess set up in my site's document root for 404 pages regarding anything that doesn't have to do with my wiki. By default, PmWiki handles URLs a little confusingly, so I had to use this in order to get it to look like mydomain.com/pmwiki/Namespace/Page, and I had to create a .htaccess in /pmwiki to remove parts of the URL. PmWiki also has a custom 404 page (Site/PageNotFound) that has stopped working; now my site uses the /404.htm page. I noticed this when trying to install this "recipe" to enable case-insensitive URLs. Currently the only way to access Site/PageNotFound is by actually linking to it, and, if you read how that recipe seems to function, this is an issue. Currently mydomain.com/pmwiki/blahblah and mydomain.com/pmwiki/legitimate_namespace_but_lowercase/legitimate_lowercase_page_name both direct to mydomain.com/404.htm. I have to admit I'm very confused, and I apologize if I was unclear in any of this, but I could definitely use some help. Thanks!
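    To the question in the title: yes, Apache allows a different ErrorDocument per part of the tree, because the directive is valid in directory and .htaccess context as well as in the server config. A sketch, assuming the wiki's clean URLs really do expose Site/PageNotFound at the path shown (an assumption, not something verified against this install):

        # .htaccess in the document root: site-wide fallback
        ErrorDocument 404 /404.htm

        # .htaccess in /pmwiki: override just for wiki URLs
        ErrorDocument 404 /pmwiki/Site/PageNotFound

    The .htaccess that already lives in /pmwiki for the clean-URL rewrites can carry the second directive, so requests under /pmwiki get the wiki's own not-found page while everything else keeps using /404.htm.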

    Read the article

  • Lost access to the unity interface how to fix? (ubuntu 11.10)

    - by Tal Galili
    OK, this is embarrassing: I installed Compiz Config Settings Manager and tried to adjust it so that the transition time between changing tabs (using Alt+Tab) would be short. By accident I unchecked the box of something else, it asked me about a conflict, and I pressed the "x" button to close the window. As a result I stopped seeing the Unity interface; that is, I cannot see any buttons on the left side. I went to the terminal (Ctrl+Alt+F1) and ran ccsm, and got the following error:

        $ ccsm
        /usr/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
          warnings.warn(str(e), _gtk.Warning)
        Traceback (most recent call last):
          File "/usr/bin/ccsm", line 93, in <module>
            import ccm
          File "/usr/lib/python2.7/site-packages/ccm/__init__.py", line 1, in <module>
            from ccm.Conflicts import *
          File "/usr/lib/python2.7/site-packages/ccm/Conflicts.py", line 26, in <module>
            from ccm.Constants import *
          File "/usr/lib/python2.7/site-packages/ccm/Constants.py", line 29, in <module>
            CurrentScreenNum = gtk.gdk.display_get_default().get_default_screen().get_number()
        AttributeError: 'NoneType' object has no attribute 'get_default_screen'

    What should I do next? Thanks.
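    The traceback only says that ccsm was started from the text console, where no X display is available. A sketch of the usual recovery path (the plugin name is a guess at what got unchecked, since unticking the Unity plugin is exactly what makes the launcher and panel disappear):

        # from the Ctrl+Alt+F1 console, run ccsm against the already-running X session
        DISPLAY=:0 ccsm

    With ccsm open again, re-ticking the "Ubuntu Unity Plugin" checkbox in the Desktop category should bring the launcher and panel back; the Alt+Tab timing can then be changed without disabling anything else.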

    Read the article

  • Google Sites (via Apps) setup questions

    - by Dave
    I thought that it would be a piece of cake to set up a Google site via Google Apps, but perhaps my previous (limited) experience with web development has given me unrealistic expectations. I have actually had a really tough time finding help with the exact question that I have, which is: how do I change the home page contents??? You see, I'm used to having hosting with someone like GoDaddy, where I can just FTP in and drop my HTML files in the www folder. From research I have found that this is simply not possible with any flavor of Google Sites. That's fine, I can live with it. So let's say I have www.mydomain.com. When I hit that URL, it redirects me to a very long URL (unfortunately) like https://sites.google.com/a/mydomain.com/sites/system/app/pages/meta/domainWelcome, which just says: "Google Apps. Welcome to mydomain.com. If you are the domain administrator, get started creating your home page with Google Sites." Great! I want to do that. So I click on the "If you are the..." link and end up at a screen where I can choose a template, a name, and some visibility options. If I click on My Sites, there isn't a "default" site, i.e. the one that www.mydomain.com displays. I figured that maybe I just have to create a site first, so I went ahead and did that. My first test was to create a site that was publicly accessible. I thought that maybe if I did that, Google would decide that this must be my home page since it's the only one. But it doesn't, and I still get the "Welcome to" page. Under "More Actions", I didn't see anything interesting except for "Manage site". I went in there and had a peek around, and didn't see anything about using this as the default home page. Am I looking for something that just doesn't exist? I can't believe there isn't a way to modify the "domain welcome" page...

    Read the article

  • Which open source/free CMSs allow for staging content changes before putting live?

    - by elliot100
    I'm not sure that I've phrased the question all that well. What I'm really looking for is a feature of CMSs where content changes are made on a restricted-access 'staging/preview' site before being published to the live external site. The open source/free CMSs I've looked at so far (Textpattern, WordPress, Movable Type) don't seem to allow this, as far as I can see. Although they allow new content to be saved as draft/pending, viewable by users with appropriate privileges, this doesn't work with changes to existing content: a post/page can't be live and also have a new version pending. (Do correct me if I'm wrong.) I realise it should be possible to do this by making all changes on a staging site, and then replicating the contents of that database to a separate live site manually, but I am looking for something a little more elegant. Edit: Just to clarify, both systems which involve synchronising a live database with a staging database, and systems which offer live/staging views of a single database, would be of interest. Am sure I have seen both approaches in commercial/proprietary CMSs.

    Read the article
