Search Results

Search found 115 results on 5 pages for 'virtualenv'.

Page 4 of 5

  • Tips/Process for web-development using Django in a small team

    - by Mridang Agarwalla
    We're developing a web app using Django and we're a small team of 3-4 programmers, some doing the UI work and some doing the backend. I'd love some tips and suggestions from the people here. This is our current setup: we're using Git as our SCM tool and following this branching model; we're following PEP 8 as our style guide; Agile is our software development methodology and we're using Jira for that; we're using the Confluence plugin for Jira for documentation, and I'm going to write a script that also dumps the PyDocs into Confluence; we're using virtualenv for sandboxing and zc.buildout for building. That's everything I can think of off the top of my head. Any other suggestions/tips would be welcome. I feel that we have a pretty good setup, but I'm also confident that we could do more. Thanks.
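
    For readers unfamiliar with the sandboxing/build combination described here, a minimal bootstrap might look like this (a sketch; the directory name env is an arbitrary choice, and it assumes virtualenv and pip are installed):

        # Create an isolated environment for the project (virtualenv for sandboxing).
        virtualenv --no-site-packages env
        source env/bin/activate

        # Install zc.buildout inside the sandbox, generate a skeleton
        # buildout.cfg, then run the build; parts are defined in buildout.cfg.
        pip install zc.buildout
        buildout init
        bin/buildout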

    Read the article

  • pip: dealing with multiple Python versions?

    - by David Wolever
    Is there any way to make pip play well with multiple versions of Python? For example, I want to use pip to explicitly install things into either my site 2.5 installation or my site 2.6 installation. With easy_install, I use easy_install-2.{5,6}. And yes, I know about virtualenv, and no, it's not a solution to this particular problem.
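
    For context, pip installs a versioned script alongside each interpreter it is set up under, so one common approach is to call the pip tied to the interpreter you want to target (a sketch; somepackage is a placeholder, and the exact script name varies by pip version):

        # Each interpreter gets its own pip script; invoke the one whose
        # site-packages you want to install into.
        pip-2.5 install somepackage
        pip-2.6 install somepackage

        # Equivalently, where the installed pip supports being run as a
        # module, pick the interpreter explicitly:
        python2.6 -m pip install somepackage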

    Read the article

  • Best deployment strategy for Python google app engine

    - by sushant
    I wonder if there are any best practices/patterns for deploying Python apps on Google App Engine, specifically Django. The best practice should be a combination of existing best practices, viz. Fabric, Paver, Buildout, etc. Also, please share best-practice patterns for developing (I could not get virtualenv running with Django and the Django App Engine Helper).
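
    For what it's worth, the App Engine SDK's own deploy command is appcfg.py update, which is easy to wrap in any of the tools named above; a minimal Fabric sketch (the directory name myapp is hypothetical):

        # fabfile.py -- wrap the SDK's uploader in a Fabric task.
        from fabric.api import local

        def deploy():
            # appcfg.py ships with the App Engine SDK and uploads the
            # application described by app.yaml in the given directory.
            local("appcfg.py update myapp/")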

    Read the article

  • So now that Django 1.2 is officially out...

    - by Fedor
    Since I have Django 1.1.x on my Debian setup, how can I use virtualenv or similar without messing up my system's default Django version, which in turn would break all my sites? Detailed instructions or a great tutorial link would be very much appreciated; please don't offer vague advice since I'm still a noob. Currently I store all my Django projects in ~/django-sites and I am using Apache2 + mod_wsgi to deploy.
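
    In outline, the usual answer is to leave Debian's Django alone and give each site its own environment; a sketch, assuming virtualenv and pip are installed (the paths are hypothetical):

        # --no-site-packages keeps the system Django 1.1.x out entirely.
        virtualenv --no-site-packages ~/envs/mysite
        ~/envs/mysite/bin/pip install Django==1.2

        # Existing sites keep importing Debian's Django 1.1.x; only the
        # mod_wsgi process pointed at this environment (e.g. via its
        # site-packages directory on sys.path) sees 1.2.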

    Read the article

  • How to build and deploy Python web applications

    - by sverrejoh
    I have a Python web application consisting of several Python packages. What is the best way of building and deploying this to the servers? Currently I'm deploying the packages with Capistrano, installing the packages into a virtualenv with bash, and configuring the servers with Puppet, but I would like to go for a more Python-based solution. I've been looking a bit into zc.buildout, but it's not clear to me what I can/should use it for.
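
    For reference, zc.buildout is driven by a buildout.cfg at the project root; a minimal sketch (the egg name mywebapp is hypothetical) that assembles your packages into an isolated, repeatable set of scripts under bin/:

        # buildout.cfg
        [buildout]
        parts = app

        [app]
        # zc.recipe.egg installs the listed eggs and generates their
        # console scripts plus an interpreter wired to them.
        recipe = zc.recipe.egg
        eggs = mywebapp
        interpreter = py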

    Read the article

  • connecting to redis from wsgi application

    - by W_P
    I am running redis and httpd with mod_wsgi on the same server. I have a wsgi application using a virtualenv and the following virtualhost conf:

        WSGIPythonPath /var/www/wsgi
        <VirtualHost *:80>
            ServerName test.example.com
            DocumentRoot /var/www/html
            WSGIScriptAlias / /var/www/wsgi/myapp/app.wsgi
            CustomLog logs/test.example.com-access-log common
            ErrorLog logs/test.example.com-error-log
            <Directory /var/www/wsgi/myapp>
                Order allow,deny
                Allow from all
                WSGIScriptReloading On
            </Directory>
        </VirtualHost>

    I have a sites.pth in my /var/www/wsgi directory, adding myapp to the PYTHONPATH, and this is my /var/www/wsgi/myapp/app.wsgi:

        activate_this = '/var/www/envs/myapp/bin/activate_this.py'
        execfile(activate_this, dict(__file__=activate_this))
        from myapp.application import app as application

    Sorry, I am trying to edit the question to have the rest of my info but it keeps timing out before the edits go through.
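
    Since the question is about reaching redis from the WSGI app, here is a minimal sketch using the redis-py client (it assumes redis listens on its default localhost port; the key name is hypothetical):

        import redis

        # One module-level client is shared across requests; redis-py
        # manages the underlying connections lazily.
        r = redis.Redis(host='localhost', port=6379, db=0)

        def application(environ, start_response):
            # Trivial WSGI app that touches redis on every request.
            hits = r.incr('hit_counter')
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['hits: %d' % hits]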

    Read the article

  • Installed mountain lion, getting a python virtual env error?

    - by user27449
    I recently installed Mountain Lion (10.8) and when I open up my terminal I get this message:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ImportError: No module named virtualenvwrapper.hook_loader
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader,
        check that virtualenv has been installed for
        VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is set properly.

    Before I try and fix this, I was hoping someone could guide me, as I haven't touched Python in a while and I don't want to mess up this installation.
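
    The error message itself points at the usual diagnosis: the OS upgrade replaced the system interpreter's site-packages. A sketch of how to check and repair (not a definitive fix):

        # Does the wrapper module still import under the interpreter
        # named in VIRTUALENVWRAPPER_PYTHON?
        /usr/bin/python -c "import virtualenvwrapper.hook_loader"

        # If that fails, reinstall the wrapper for the system python:
        sudo /usr/bin/easy_install virtualenvwrapper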

    Read the article

  • deploying a Python application from a PHP developer

    - by user1218776
    I'm a little confused about the deployment process for Python. Let's say you create a brand new project with virtualenv:

    - source bin/activate
    - pip install a few libraries
    - write a simple hello world app
    - pip freeze the dependencies

    When I deploy this code onto a machine, do I first need to make sure the machine is sourced before installing the dependencies? I don't mean to sound like a total noob, but in the PHP world I don't have to worry about this because it's already part of the project: all the dependencies are registered with the autoloader in place. The steps would be:

    - rsync the files (or any other method)
    - source bin/activate
    - pip install the dependencies from the pip freeze output file

    It feels awkward, or just wrong and very error prone. What are the correct steps to take? I've searched around, but it seems many tutorials/articles assume that anyone reading them has past Python experience (imo).
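
    A minimal deployment sketch along the lines described (the host and paths are hypothetical; it assumes virtualenv is available on the server). Note that nothing needs to be "sourced" if you call the environment's own binaries directly:

        # Ship the code, then build a fresh environment next to it and
        # install the frozen dependencies into it.
        rsync -az ./ deploy@server:/srv/myapp/
        ssh deploy@server '
            cd /srv/myapp &&
            virtualenv --no-site-packages env &&
            env/bin/pip install -r requirements.txt
        '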

    Read the article

  • Which AMI should I use as a base for a Django application?

    - by Edan Maor
    I'm starting development of a Django application on Amazon Web Services, and I'm looking to build an instance that will serve the Django app. I don't have much experience with such things, having only used a shared host before (WebFaction). So I'm wondering: which AMI should I use as a base? I'm assuming I want an Ubuntu AMI, possibly with certain things like Apache pre-installed? One minor point: I'm planning to serve several different Django projects from the same instance. I use virtualenv on my dev machine right now to separate the different projects, and I'm assuming I'll do the same on EC2. Thanks!

    Read the article

  • Packaging python software with custom dependencies

    - by viraptor
    Hi, I'm looking for a good way to package a Python application that is going to be deployed on a Debian server. The application itself depends on some modules which are not included in the base Debian repository, although they might be in the future. This creates some problems... I depend on some patches to those modules, so if the original module gets installed one day, the application will break. However, if I install everything I need in a virtualenv just for that application, I lose the ability to upgrade Python itself (in case of security updates). The third option would be to rename my fork of the upstream module and just treat it as a completely separate one, but that would mean changing the code (not much work, but it wouldn't be that clean/universal anymore). Are there any other options that I missed? Are there any pros/cons I didn't see in the solutions above?

    Read the article

  • How to run django on localhost with nginx and uwsgi?

    - by user2426362
    How do I run Django on localhost with nginx and uwsgi? This is my config, but it does not work. nginx:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/localhost_access.log;
            error_log /var/log/nginx/localhost_error.log;
            location / {
                uwsgi_pass unix:///tmp/localhost.sock;
                include uwsgi_params;
            }
            location /media/ {
                alias /home/user/projects/zt/myproject/myproject/media/;
            }
            location /static/ {
                alias /home/user/projects/zt/myproject/myproject/static/;
            }
        }

    uwsgi:

        [uwsgi]
        vhost = true
        plugins = python
        socket = /tmp/localhost.sock
        master = true
        enable-threads = true
        processes = 2
        wsgi-file = /home/user/projects/zt/myproject/myproject/wsgi.py
        virtualenv = /home/user/projects/zt
        chdir = /home/user/projects/zt/myproject
        touch-reload = /home/user/projects/zt/myproject/reload

    This config works on my Ubuntu server with a normal domain (not localhost), but on localhost it does not work: if I open localhost in a web browser I get "Welcome to nginx!".

    Read the article

  • How do I tell if a Python pip install is working correctly or hanging?

    - by wobbily_col
    I am trying to install the Python pandas package in a virtualenv using pip. On my development machine it installed correctly, but now that I am trying on the server, it gets so far and then seems to get stuck:

        warnings.warn(LapackSrcNotFoundError.__doc__)
        /apps/PYTHON/2.7.3/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
          warnings.warn(msg)
        non-existing path in 'numpy/distutils': 'site.cfg'
        non-existing path in 'numpy/lib': 'benchmarks'
        Could not locate executable gfortran
        Could not locate executable f95
        Found executable /apps/modules/wrappers/fortran/ifort

    top shows ifort running at 46% CPU. Is there any way I can tell if this is working correctly (can I check files it is updating, for example), or if it is stuck in a loop? It has been running for 40 minutes so far.
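
    A few ways to tell whether such a build is progressing rather than looping (a sketch; the compiler PID is a placeholder):

        # Re-run with verbose output so each compiler invocation is printed:
        pip install -v pandas

        # Watch which files the compiler currently has open:
        lsof -p <ifort-pid>

        # Or trace its system calls to confirm it is still doing work:
        strace -p <ifort-pid>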

    Read the article

  • Installing both lxml 3.1.2 and lxml2 on ubuntu 12.04

    - by wgw
    I asked this on SO: http://stackoverflow.com/questions/19852911/lxml-3-1-2-and-lxml2-both-on-ubuntu/19856674#19856674 But it is perhaps more appropriate for Ask Ubuntu, so here it is again, reformulated. On the lxml site they suggest that it is possible to have both lxml2 and the newest version of lxml on Ubuntu:

        Using lxml with python-libxml2
        If you want to use lxml together with the official libxml2 Python
        bindings (maybe because one of your dependencies uses it), you must
        build lxml statically. Otherwise, the two packages will interfere in
        places where the libxml2 library requires global configuration, which
        can have any kind of effect from disappearing functionality to crashes
        in either of the two.
        To get a static build, either pass the --static-deps option to the
        setup.py script, or run pip with the STATIC_DEPS or STATICBUILD
        environment variable set to true, i.e.

            STATIC_DEPS=true pip install lxml

        The STATICBUILD environment variable is handled equivalently to the
        STATIC_DEPS variable, but is used by some other extension packages, too.

    I am generally confused about how pip packages and Ubuntu packages get along, so I hesitate to run STATIC_DEPS=true pip install lxml. Will it damage/confuse my installed lxml2 package? The suggestion on SO was to install the new lxml in a virtualenv. That looks like the best way to go, but the lxml site is suggesting that a dual installation will work also. In general: what happens if I use pip (to get a newer install) for a package that is already installed by apt-get?
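
    The virtualenv route suggested on SO keeps the two installations from ever meeting; a sketch (the environment path is arbitrary):

        # The statically built lxml lives only inside this environment,
        # so whatever apt-get installed system-wide is left untouched.
        virtualenv --no-site-packages ~/envs/lxml-test
        source ~/envs/lxml-test/bin/activate
        STATIC_DEPS=true pip install lxml==3.1.2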

    Read the article

  • Installing datacommons from sunlight

    - by Newben
    I know strictly nothing about Python, and I am installing datacommons from Sunlight Labs, so I followed the README.md step by step: https://github.com/sunlightlabs/datacommons First, the doc says to add dc_data and dc_matchbox to the virtualenv, but I didn't find them. I went on to the final step, running ./manage.py runserver, and got the following message:

        (datacommons)newben@newben-VirtualBox:~/share-ubuntu/sunlightlabs-datacommons-e3ff1a3$ ./manage.py runserver
        fatal: Not a git repository (or any parent up to mount parent /home/mbenchoufi)
        Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
        Error: Can't find the file 'settings.py' in the directory containing './manage.py'. It appears you've customized things.
        You'll have to run django-admin.py, passing it your settings module.
        (If the file settings.py does indeed exist, it's causing an ImportError somehow.)

    The 'sunlightlabs-datacommons-e3ff1a3' folder is where I downloaded and put the files from GitHub. By the way, I didn't understand how to deal with the settings file. Could someone help me understand how to install datacommons?

    Read the article

  • Django crash without message

    - by schneck
    Hi there, I have a Django project which was running fine until I made "some changes" I don't remember exactly. Since that point, on every request my project crashes silently:

        $ ./manage.py runserver -v2
        Validating models...
        0 errors found

        Django version 1.1.1, using settings 'src.settings'
        Development server is running at http://127.0.0.1:8000/
        Quit the server with CONTROL-C.

    After I request a page (admin or frontend), it returns to the prompt. I did not find any option to get more verbose output than -v2; is there any logfile I can use? I'm using Django 1.1.1 on Mac OS X 10.6 with virtualenv, Python 2.6. Thanks a lot.

    Read the article

  • Removing python and then re-installing on Mac OSX

    - by JudoWill
    I was wondering if anyone had tips on how to completely remove a Python installation from Mac OS X (10.5.8), including virtual environments and its related binaries. Over the past few years I've completely messed up the installed site-packages, virtual environments, etc., and the only way I can see to fix it is to just uninstall everything and re-install. I'd like to completely re-do everything and use virtualenv, pip, etc. from the beginning. On the other hand, if anyone knows a way to do this without removing Python and re-installing, I'd be happy to hear about it. Thanks, Will

    Read the article

  • Windows equivalent to this Makefile

    - by Sridhar Ratnakumar
    The advantage of writing a Makefile is that "make" is generally assumed to be present on the various Unices (Linux and Mac primarily). Now I have the following Makefile:

        PYTHON := python

        all: e installdeps

        e:
            virtualenv --distribute --python=${PYTHON} e

        installdeps:
            e/bin/python setup.py develop

        clean:
            rm -rf e

    As you can see, this Makefile uses simple targets and variable substitution. Can this be achieved on Windows? By that I mean: without having to install external tools (like cygwin make); perhaps a make.cmd? Typing "make installdeps", for instance, should work both on Unix and Windows.
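
    A make.cmd along the lines asked about might look like this (a sketch, assuming virtualenv is on PATH; note that on Windows a virtualenv puts its binaries in Scripts\ rather than bin/):

        @echo off
        rem make.cmd -- minimal Windows counterpart of the Makefile above.
        if "%PYTHON%"=="" set PYTHON=python
        rem No argument behaves like the "all" target.
        if "%1"=="" ( call :e & call :installdeps & goto :eof )
        goto %1

        :e
        virtualenv --distribute --python=%PYTHON% e
        goto :eof

        :installdeps
        e\Scripts\python setup.py develop
        goto :eof

        :clean
        rmdir /s /q e
        goto :eof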

    Read the article

  • How to reset Scrapy parameters? (always running under same parameters)

    - by Jean Ventura
    I've been running my Scrapy project with a couple of accounts (the project scrapes a specific site that requires login credentials), but no matter the parameters I set, it always runs with the same ones (same credentials). I'm running under virtualenv. Is there a variable or setting I'm missing? Edit: it seems that this problem is Twisted related. Even when I run:

        scrapy crawl -a user='user' -a password='pass' -o items.json -t json SpiderName

    I still get an error saying:

        ERROR: twisted.internet.error.ReactorNotRestartable

    And all the information I get is from the last 'successful' run of the spider.
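
    For reference, each -a key=value on the command line is handed to the spider's constructor as a keyword argument, so the spider must pick the values up explicitly; a minimal sketch (the attribute names are hypothetical):

        from scrapy.spider import BaseSpider

        class SpiderName(BaseSpider):
            name = "SpiderName"

            def __init__(self, user=None, password=None, *args, **kwargs):
                # Stash the credentials passed via -a for the login request.
                super(SpiderName, self).__init__(*args, **kwargs)
                self.user = user
                self.password = password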

    Read the article

  • Django colon syntax in template tags: only in newer versions?

    - by Alan
    I just deployed an application to a new server, and although I'm using virtualenv, I had to install a new environment on the production server, which has a different architecture. Anyway, I received no TemplateSyntaxErrors in development, but on the production server I get:

        Exception Type: TemplateSyntaxError
        Exception Value: Caught SyntaxError while rendering: invalid syntax (views.py, line 25)

    The offending line is:

        {% url admin:password_change as password_change_url %}

    Upon removing that line, the TemplateSyntaxError hops to the next line that has a colon in it (and lets other template tags work fine). So my question is this: is there some discrepancy between versions of Python/Django that would allow or disallow the namespacing syntax? The template tags are in django-grappelli (http://code.google.com/p/django-grappelli/), so I'd rather not go through their code and rewrite all the template tags.

    Development server: 32-bit Debian, Python 2.5.5, Django 1.2.1
    Production server: 64-bit CentOS, Python 2.4.3, Django 1.2.1

    Any ideas?

    Read the article

  • How do I get PyLint to find namespace packages?

    - by tjd.rodgers
    I have a virtualenv where I've installed two packages, both using the company.project_name namespace, so the first package is importable from company.project_name.one and the second from company.project_name.two. The challenge is that I can't seem to be able to run PyLint on either one of them. If I issue:

        $ pylint company.project_name.one

    I get:

        ************* Module company.project_name.one
        F:  1, 0: No module named project_name.one (fatal)

    I suspect that I'm probably doing something wrong. Is there a proper way to do this? Edit: I should have made it clear that company.project_name and company are namespace packages and not regular packages.
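
    One knob worth trying, though not a guaranteed fix for namespace packages: pylint's --init-hook runs arbitrary Python before analysis, which lets you point sys.path at wherever the packages actually live (the path below is a placeholder):

        $ pylint --init-hook='import sys; sys.path.append("/path/to/site-packages")' company.project_name.one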

    Read the article

  • How can I make a dashboard with all pending tasks using Celery?

    - by e-satis
    I want to have some place where I can watch all the pending tasks. I'm not talking about the registered functions/classes as tasks, but the actual scheduled jobs, for which I could display: name, task_id, eta, worker, etc. Using Celery 2.0.2 and djcelery, I found `inspect' in the documentation. I tried:

        from celery.task.control import inspect

        def get_scheduled_tasks(nodes=None):
            if nodes:
                i = inspect(nodes)
            else:
                i = inspect()
            scheduled_tasks = []
            dump = i.scheduled()
            if dump:
                # dump maps each worker name to its list of scheduled tasks
                for worker, tasks in dump.items():
                    for task in tasks:
                        scheduled_task = {}
                        scheduled_task.update(task["request"])
                        del task["request"]
                        scheduled_task.update(task)
                        scheduled_task["worker"] = worker
                        scheduled_tasks.append(scheduled_task)
            return scheduled_tasks

    But it hangs forever on dump = i.scheduled(). Strange, because otherwise everything works. Using Ubuntu 10.04, Django 1.0 and virtualenv.

    Read the article

  • On Mac OS X, do you use the shipped python or your own?

    - by The MYYN
    On Tiger, I used a custom Python installation to evaluate newer versions, and I did not have any problems with that. Now Snow Leopard is a little more up to date and by default ships with:

        $ ls /System/Library/Frameworks/Python.framework/Versions/
        2.3  2.5  2.6  @Current

    What could be considered best practice: using the Python shipped with Mac OS X, or a custom-compiled version in, say, $HOME? Are there any advantages/disadvantages to using one option over the other? My setup was fairly simple so far and looked like this: a custom-compiled Python in $HOME, and a $PATH that looks into $HOME/bin first and subsequently uses my private Python version. $PYTHONPATH also points to this local installation. This way I did not need to sudo-install packages; virtualenv took care of the rest.
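
    The $PATH arrangement described is plain shell-profile configuration; a sketch (the site-packages path is hypothetical and depends on how the custom build was configured):

        # ~/.profile -- prefer the home-grown interpreter over the system one.
        export PATH="$HOME/bin:$PATH"
        export PYTHONPATH="$HOME/lib/python2.6/site-packages"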

    Read the article

  • Sharing a fabfile across multiple projects

    - by Matthew Rankin
    Fabric has become my deployment tool of choice, both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:

    - copying the fabfile.py from one Django project to another, and
    - modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).

    One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt, I have a recreatable virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:

    - being able to pip install the common tasks defined in the fabfile.py, and
    - having a fab_config file containing the host configuration information for each project and overriding any tasks as needed.

    Any recommendations on how to increase the DRYness of my Fabric workflow?
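
    One shape the pip-installable idea could take (a sketch; the package name fabcommon and its task names are hypothetical): publish the shared tasks as a package, then keep each project's fabfile down to imports plus per-project settings.

        # fabfile.py in each project.
        from fabric.api import env

        # Re-export the shared tasks so `fab deploy` etc. find them here.
        from fabcommon.tasks import deploy, webserver_restart

        # Per-project configuration, versioned with the project itself.
        env.hosts = ['example.com']
        env.port = 2222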

    Read the article
