Search Results

Search found 73266 results on 2931 pages for 'django file upload'.


  • Blank list of Windows services

    - by Joe
    Recently, when I open Windows Services (always as administrator), I get a blank list of services. When I try to click on one of the empty lines I get a "Script Error" message. This happens over and over again; after it recurred several times I restarted my computer. I can't pinpoint exactly when this started happening, or whether I made any specific changes to my computer at that time. Someone told me to try running sfc /scannow as administrator, but when I do, the scan stops at 34% and I get the message: "Windows Resource Protection could not perform the requested operation." I am running Windows 7 Enterprise 64-bit, and I would really like to avoid reinstalling Windows. Does anyone know how to fix this?

    Edit - Here is another attempt I made and some more information that might help. Following WhoIsRich's suggestion, I tried the command sfc /scannow /offbootdir=c:\ /offwindir=c:\windows. This gave the error message "The arguments passed to sfc are invalid. The offline windows directory specified points to the online system", and then I realized this command is meant to be run after booting from another system. Since I don't have my Windows installation disk right now, I used my own system to create a recovery disk, then restarted my computer and used the recovery disk to boot. I ran the above command again and got: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log". I then restarted my computer and let it boot up normally. The problem with Windows Services persists, and CBS.log is a long log file with many entries; I don't know if there is useful information in it, and if there is, how to find it.
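    One way to narrow CBS.log down, offered as a rough sketch: SFC tags its own entries with the marker "[SR]", so filtering for that marker usually isolates the lines about corrupt or repaired files. A minimal Python filter (the log path is the standard location, but verify it on your system):

        # Print only the System File Checker entries ("[SR]") from CBS.log
        log_path = r"C:\Windows\Logs\CBS\CBS.log"
        with open(log_path, errors="ignore") as log:
            for line in log:
                if "[SR]" in line:
                    print(line.rstrip())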

    Read the article

  • Automatically organizing Windows files and folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Other files, on the other hand, are there to stay, and in the past I used to move them out of the download folder manually. Other folders on my computer hold lots of source code, tests, programs, tools, and so on. I need technology to organize billions of files. What are the best tools for organizing and sorting your files and folders automatically?

    Digital Janitor - http://davidevitelaru.com/software/digital-janitor/
    Belvedere - http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
    Download Mover - http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
    File/Folder Date Organizer - http://seedling.dcmembers.com/other/ffdorg.zip
    DropIt - http://www.lupopensuite.com/db/dropit.htm

    Other questions about organizing files, desktops, etc.: How to automate the process of organizing audio files on Windows; Organizing My Windows Desktop; What's a good way for organizing PDF documents on Windows?; Folksonomy tagging for files; What is your method of "folksonomy" tagging for files on your local machine?
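    As a baseline for what these tools automate, here is a minimal Python sketch that sorts a downloads folder by extension; the folder path and the extension-to-destination map are illustrative assumptions, not anyone's recommended rules:

        import shutil
        from pathlib import Path

        # Hypothetical source folder and extension -> destination map
        downloads = Path.home() / "Downloads"
        rules = {".exe": "Installers", ".msi": "Installers",
                 ".pdf": "Documents", ".mp4": "Videos"}

        for item in downloads.iterdir():
            target = rules.get(item.suffix.lower())
            if item.is_file() and target:
                dest = downloads / target
                dest.mkdir(exist_ok=True)          # create the bucket on first use
                shutil.move(str(item), str(dest / item.name))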

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my house. I would like to use this server as a file server for my family (brothers, sisters and parents, who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple:

    1) It should be easy to access, using standard web browsers or by mounting the drive (no VPN setup, PuTTY, etc.).
    2) It should be somewhat secure. We just want to share family pictures instead of putting them on Facebook, Picasa or some other web service; nothing top secret.

    Here is what I've looked into:

    1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest-mode authentication. User controls and permissions are not as flexible as Samba's (at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
    2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better Internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
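    To tell whether the Windows 7 trouble is on the client or the server side, one quick check is to exercise the share with a plain WebDAV request from Python; a minimal sketch using the requests library, where the URL and credentials are placeholders:

        import requests
        from requests.auth import HTTPDigestAuth

        # Hypothetical share URL and account; PROPFIND lists a collection in WebDAV
        url = "https://example-home-server.net/family/"
        resp = requests.request("PROPFIND", url,
                                auth=HTTPDigestAuth("manny", "secret"),
                                headers={"Depth": "1"})
        print(resp.status_code)   # 207 Multi-Status means the server answered correctly
        print(resp.text[:500])

    If this works but Windows 7 still refuses to mount the share, the problem is the built-in WebClient rather than the server configuration.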

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original size, so it made sense to me to compress the files first. I've tried a number of tools, including:

    gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
    bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
    WinRAR 3.90
    7-Zip 4.65 (7za.exe)

    I've attempted to use the WinRAR and 7-Zip options for splitting the archive into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either: no matter the tool, it ends up running up against the size of the file. Especially frustrating is that I've transferred files of similar size between Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option in my situation, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning

        Creating archive h:\dbatmp\zip\db-20100419_1228.7z

        Compressing  db-20100419_1228.bak

        System error:
        Unspecified error
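    If every archiver chokes on the whole file, one workaround to consider is sidestepping them for the splitting step: cut the .BAK into fixed-size chunks with a short script, record a checksum per chunk so the far side can verify before reassembly, and compress each chunk separately. A rough Python sketch, with the path and chunk size taken from the example above purely for illustration:

        import hashlib

        SRC = r"h:\dbatmp\db-20100419_1228.bak"   # example path from above
        CHUNK = 250 * 1024 * 1024                  # 250 MB per piece

        with open(SRC, "rb") as src:
            part = 0
            while True:
                data = src.read(CHUNK)
                if not data:
                    break
                name = "%s.%03d" % (SRC, part)
                with open(name, "wb") as out:
                    out.write(data)
                # a checksum per piece lets the receiver verify each one independently
                print(name, hashlib.md5(data).hexdigest())
                part += 1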

    Read the article

  • Methods to transfer files from a Windows server to a Linux server

    - by Raze2dust
    Hi, I need to transfer webserver-log-like files, generated periodically, from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each, and I get about one file per minute. I can accept about a 5-minute lag between a file being written on Windows and it being available on the Linux machines. I am a bit confused between the various options here, as I am quite inexperienced in this kind of design:

    1) I am thinking of writing a service in C#.NET which will periodically archive, compress and send the files over to the Linux machines. These files are pretty compressible: WinRAR can turn 32 MB of them into a 1.2 MB archive, so that should solve the network transfer speed issue. But then how exactly do I transfer the files to Linux? I could mount the Linux drive on the Windows server using Samba, or I could run an FTP server, or I could send each file serialized as a POST request. Which one would be good? Also, I have to minimize the load on the Windows server.
    2) Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?). I could then write the compressing and copying part on Linux itself.

    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and what other points should I be worried about? Thanks, Hari
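    For the POST variant, a compress-and-upload loop with naive retries is only a few lines; sketched here in Python rather than C# just to show the shape of it, with the endpoint URL and the backoff policy as assumptions:

        import gzip
        import shutil
        import time
        import requests

        def upload_with_retries(path, url, attempts=5):
            """Gzip the file, then POST it, backing off between failures."""
            gz_path = path + ".gz"
            with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
            for i in range(attempts):
                try:
                    with open(gz_path, "rb") as f:
                        r = requests.post(url, files={"logfile": f}, timeout=60)
                    r.raise_for_status()
                    return True
                except requests.RequestException:
                    time.sleep(2 ** i)   # exponential backoff before retrying
            return False

        # hypothetical receiver endpoint on the Linux side
        upload_with_retries(r"C:\logs\access-20100419.log",
                            "http://example-receiver.in/upload")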

    Read the article

  • How to implement a virtual server running Ubuntu inside a file server on Windows?

    - by user541445
    I work for a company that has some budget limitations. They need a client/server application. I can code the application; I've run mini tests on primordial applications that work. The thing is that they only have file servers, and the application must handle concurrent use, so the database must live on the local network (a file server is the only choice). So far I have explored almost every option available, starting with:

    Desktop databases:
    - Access (we have a license), but concurrency is just not effective; besides, it's Windows software, yuck.
    - SQLite: nice, but since the information they manage is a lot, I performed some concurrency tests doing INSERTs and SELECTs at the same time. It fails; somehow it just stops inserting.
    - OpenOffice Base: I dismembered a Base install only to find it was HSQLDB in file mode. I tried it; not concurrent at all.
    - Etc. (Name an open-source desktop database manager, and yes, I've tried it.)

    Server databases:
    Call me a stubborn person, but the documentation for some server databases says they will work in a file mode. I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc.

    So I really need a hand here. I've been doing this install/delete/depression routine with a lot of databases for three weeks, until I came up with a crazy idea and came here to express it. My question comes down to this: will installing lightweight virtualization software like Virtual PC, and inside it a server OS like Ubuntu Server, work? Will it run 24/7, or only while the Virtual PC software is open? Will this work on a file server? Any suggestion, answer, criticism of the place I work, or crazy new concurrent database that will work on a file server will be most welcome.
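    On the SQLite point, stalled INSERTs under concurrent readers are usually lock contention rather than data loss, and two settings are worth retrying before ruling it out; a hedged sketch, where the UNC path and the items table exist only for illustration:

        import sqlite3

        # a longer timeout makes a writer wait out readers' locks instead of failing
        conn = sqlite3.connect(r"\\fileserver\share\app.db", timeout=30.0)
        conn.execute("PRAGMA busy_timeout=30000;")
        conn.execute("PRAGMA journal_mode=WAL;")   # SQLite 3.7+; check your build
        # assumes an existing items(name) table, purely for illustration
        conn.execute("INSERT INTO items (name) VALUES (?)", ("widget",))
        conn.commit()

    One caveat worth stating plainly: SQLite's own documentation warns that its file locking is unreliable on network filesystems (and WAL mode does not work across them at all), so a share-hosted database may be the real source of the stalls no matter the pragmas.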

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - our main file server is a Linux machine;
    - on the LAN, users simply access the files using SMB;
    - each user has an account on the file server and his/her own access rights;
    - user accounts are simple passwd/group security accounts, not NIS/LDAP.

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Anyone have pointers to neat solutions? Any experiences?

    Edit: The client machines are Windows only.

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both Server Fault (system backups) and Stack Overflow (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them:

    - It should be able to handle live files like registry parts, so it may need to be offline backup software.
    - It shouldn't silently pass over files the current admin user doesn't have permissions on (files with no permission entries, or owned by the 'system' user).
    - Registry change tracking would be a bonus but is not a requirement.

    I use virtual machines for malware analysis, and there is not even a solution for listing file changes in disk snapshot files (delta VMDKs). I currently use Ashampoo for monitoring changes. Though it's the best among its peers, it's not good software and hasn't really evolved across the many 'Platinum' and 'Deluxe' versions released over the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk and registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So I think the solution should compare files using hashes, or at least file sizes. There is plenty of backup software out there, and I'm sure one of them can handle this, offline or online.
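    The hash-comparison idea is easy to prototype before committing to a product; a minimal Python snapshot/diff sketch follows. Note it will still miss files the running user cannot read, which is exactly the permissions caveat above, so unreadable paths are flagged rather than skipped:

        import hashlib
        import os

        def snapshot(root):
            """Map every readable file under root to an MD5 of its contents."""
            state = {}
            for folder, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(folder, name)
                    try:
                        with open(path, "rb") as f:
                            state[path] = hashlib.md5(f.read()).hexdigest()
                    except OSError:
                        state[path] = "<unreadable>"   # flag instead of silently skipping
            return state

        before = snapshot(r"C:\sample")
        # ... run the sample under analysis ...
        after = snapshot(r"C:\sample")
        changed = {p for p in after if before.get(p) != after[p]}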

    Read the article

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
    My /etc/hosts:

        #
        # Internet host table
        #
        ::1          localhost
        127.0.0.1    localhost
        XX.XX.XX.XX  myserver loghost

    What is the purpose of loghost? If it were not for loghost being in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words.  Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *

        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice                    ifdef(`LOGHOST', /var/log/authlog, @loghost)

        mail.debug                      ifdef(`LOGHOST', /var/log/syslog, @loghost)

        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    Very interesting: when shutting down, alerts go to all users, probably through the *.emerg * line. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule, you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level pair such as user.err, auth.notice, etc.

    Read the article

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer-lab environment and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard; everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is local. Typically a student will have a Word file that they have been working on; it was saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden, and there are no autosaves or anything in the Cache folder. It's definitely not in the Trash or .Trashes folder. Word can't find it when you click on it in 'recent documents'. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder; I look ssh'd in as root using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized. It's happened to at least 3 different users in the past year. Much whining. Any idea?

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default web-browsing-history search engine only lets you access the records of the most recent 100 days. Nevertheless, in your application data, Chrome keeps your entire browsing history in SQLite database files, with the file-naming scheme "History Index YYYY-MM". I am looking for a way to search through my entire browsing history, with sophisticated filters (limit search terms to certain fields such as URL, domain, title or body text; wildcard or regex terms; date ranges), in:

    - either some ready-made software. eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes and has the same limited time horizon as the default search. Beyond that, I could not find any suitable Chrome extension or standalone (Mac) app;
    - or a command line to join multiple SQLite database files into one database, which I can then query (with full syntax power), in the spirit of the pseudo-code below.

    Preferred this way:

        sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles

    Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly importing only those files which I know have not yet been imported into the ChromeHistoryAll database:

        cd ChromeAppData; sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3

    All my queries I would then perform against the database "ChromeHistoryAll".

    P.S.: An additional question of general interest: is there a way to perform a database query against a temporary database created on the fly from multiple files? Like:

        sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3

    This is surely not applicable to my Chrome question, as these History Index files have a combined size of 500 MB, so such a query would perform badly. But it could come in handy in other situations.
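    The sqlite3 command-line tool has no --importFiles flag, but the same effect is a few lines of Python with ATTACH; a rough sketch that merges one history table across files. The table name "pages" and the reliance on a unique key are assumptions; inspect the real schema with .schema first and adjust:

        import glob
        import sqlite3

        conn = sqlite3.connect("ChromeHistoryAll.db")
        for path in sorted(glob.glob("History Index *")):
            conn.execute("ATTACH DATABASE ? AS src", (path,))
            # 'pages' is a stand-in for the real table name; CREATE TABLE ... AS
            # does not copy constraints, so add a unique key for real deduplication
            conn.execute("CREATE TABLE IF NOT EXISTS pages AS SELECT * FROM src.pages WHERE 0")
            # OR IGNORE approximates --importOnlyYetUnknownFiles, given a unique key
            conn.execute("INSERT OR IGNORE INTO pages SELECT * FROM src.pages")
            conn.commit()
            conn.execute("DETACH DATABASE src")

    For the P.S.: connecting to ":memory:" and ATTACHing the files gives exactly the on-the-fly temporary database described, with the anticipated performance cost for 500 MB of input.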

    Read the article

  • Folder sharing: NTFS permissions combined with share permissions

    - by Muhammad Adly
    I have a problem on my domain. The history: I had a server with Windows 2008 R2 installed, holding the following roles: AD, DNS, DHCP and File. A month ago I decided to install a new 2008 R2 server to take the AD, DNS and DHCP roles and leave the file-server role on the old one. I did the following, exactly:

    1) robocopied all my data to an external HDD
    2) installed a new server with 2008 R2
    3) transferred all 5 FSMO roles to move the domain to the new server (MainDC)
    4) NETLOGON and SYSVOL were not transferred, but I decided to reinitialize them, and now they are operating (MainDC)
    5) re-created and re-configured new GPOs and linked them to my OUs
    6) reinstalled the old server's operating system with a fresh installation of Windows 2008 R2 (FileServer)
    7) joined my domain with my domain credentials

    The issue: when I share a folder on \\fileserver, the share permissions I set are applied on the main shared folder and its subfolders, but the security (NTFS) settings are not applied. For example, say I'm sharing \\fileserver\MainFolder with share permissions allowing Authenticated Users to read, so everyone can read the main shared folder. If I then set security permissions on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot perform those operations when accessing the folder through the network share. I tried a lot of steps from topics online: taking ownership of the folder, removing inheritance from the parent folder, applying changes to child objects. I also tried building a new folder structure, but got the same issue; I tried another host PC, and I still get the same issue.

    Read the article

  • 3 servers: is this a cluster scenario?

    - by HornedBeast
    Hello. At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it is just there for file sharing and mail. I also have two other servers with exactly the same hardware and spec as the first, which use rsync to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email, especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical, with perhaps one acting as the main load balancer. If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Python Imaging Library, close method

    - by DNN
    Hello, I have been using PIL for the first time today. I wanted to resize an image (assuming it was larger than 800x600) and also create a thumbnail. I can do either of these tasks separately, but not together in one method (I am doing a custom save method in the Django admin). Together, they return a "cannot identify image file" error. The error is on the second "image = Image.open(self.photo)" line, just after the comment "# if image size is greater than 800 x 600 then resize image". I thought this might be because the image is already open, but if I remove the line I still get issues. So I thought I could try closing it after creating the thumbnail and then reopening it, but I couldn't find a close method... This is my code:

        def save(self):
            # create thumbnail
            Thumb_Size = (75, 75)
            image = Image.open(self.photo)
            if image.mode not in ('L', 'RGB'):
                image = image.convert('RGB')
            image.thumbnail(Thumb_Size, Image.ANTIALIAS)
            temp_handle = StringIO()
            image.save(temp_handle, 'jpeg')
            temp_handle.seek(0)
            suf = SimpleUploadedFile(os.path.split(self.photo.name)[-1],
                                     temp_handle.read(), content_type='image/jpg')
            self.thumbnail.save(suf.name+'.jpg', suf, save=False)

            # if image size is greater than 800 x 600 then resize image
            image = Image.open(self.photo)
            if image.size[0] > 800:
                if image.size[1] > 600:
                    Max_Size = (800, 600)
                    if image.mode not in ('L', 'RGB'):
                        image = image.convert('RGB')
                    image.thumbnail(Max_Size, Image.ANTIALIAS)
                    temp_handle = StringIO()
                    image.save(temp_handle, 'jpeg')
                    temp_handle.seek(0)
                    suf = SimpleUploadedFile(os.path.split(self.photo.name)[-1],
                                             temp_handle.read(), content_type='image/jpg')
                    self.photo.save(suf.name+'.jpg', suf, save=False)

            # enter info to database
            super(Photo, self).save()
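    For what it's worth, a common cause of "cannot identify image file" on a second Image.open of a Django file field is that the file pointer is sitting at the end of the file after the first pass; there is no close needed, just a rewind. A hedged sketch of the adjustment, assuming self.photo behaves like a standard Django FieldFile:

        # rewind the underlying file before handing it to PIL again
        self.photo.seek(0)
        image = Image.open(self.photo)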

    Read the article

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters, and then POSTs to that URL when the task queue is ready to execute the task. This structure means that tasks always execute the most current version of the code. Conversely, my Gearman workers all run code within my Django project, so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent of the code base, so that I can push a new live version without restarting any workers. So I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue? The process would work like this:

    1) A user request comes in and triggers a few tasks that shouldn't block.
    2) Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
    3) The Gearman server finds a worker and passes the URL and POST data to it.
    4) The worker simply POSTs to the URL with the data, thus executing the task.

    Assume the following:

    - Each request from a Gearman worker is signed somehow, so that we know it's coming from a Gearman server and not a malicious request.
    - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once, triggered by a previous request. So one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request, to rate-limit. Any thoughts?
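    The worker side of step 4 is almost library-agnostic; a minimal sketch of the signed POST such a worker would perform, where the shared secret, header name, and the Gearman wiring around this function are all assumptions:

        import hashlib
        import hmac
        import json
        import requests

        SECRET = b"shared-secret-between-queue-and-app"   # hypothetical

        def execute_task(url, params):
            """POST the task payload, signed so the app can reject strangers."""
            body = json.dumps(params)
            sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
            resp = requests.post(url, data=body, timeout=10,   # the 10 s task budget
                                 headers={"X-Task-Signature": sig,
                                          "Content-Type": "application/json"})
            resp.raise_for_status()   # a failure lets the queue decide about retrying

    The receiving view recomputes the HMAC over the body and compares; that covers the "signed somehow" assumption without any Gearman-specific machinery.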

    Read the article

  • DjangoUnicodeDecodeError while storing pickle'd data.

    - by Jack M.
    I've got a simple dict object I'm trying to store in the database after it has been run through pickle. It seems that Django doesn't like trying to encode this data. I've checked with MySQL, and the query isn't even getting there before the error is thrown, so I don't believe that is the problem. The dict I'm storing looks like this:

        {
          'ordered': [
            {'value': u'First\xd1ame Last\xd1ame', 'label': u'Full Name'},
            {'value': u'123-456-7890', 'label': u'Phone Number'},
            {'value': u'[email protected]', 'label': u'Email Address'}
          ],
          'cleaned_data': {
            u'Phone Number': u'123-456-7890',
            u'Full Name': u'First\xd1ame Last\xd1ame',
            u'Email Address': u'[email protected]'
          },
          'post_data': <QueryDict: {
            u'Phone Number': [u'1234567890'],
            u'Full Name_1': [u'Last\xd1ame'],
            u'Full Name_0': [u'First\xd1ame'],
            u'Email Address': [u'[email protected]']
          }>,
          'user': <User: itis>
        }

    The error that gets thrown is:

        'utf8' codec can't decode bytes in position 52-53: invalid data.

    Position 52-53 is the first instance of \xd1 (Ñ) in the pickled data. So far I've dug around StackOverflow and found a few questions where the database encoding for the objects was wrong. This doesn't help me, because there is no MySQL query yet; this is happening before the database. Google also didn't help much when searching for unicode errors on pickled data. It is probably worth mentioning that if I don't use the Ñ, this code works fine.
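    A common workaround, sketched below: pickle produces a byte string that is not valid UTF-8, so anything that treats it as text (a Django CharField or TextField, for instance) can choke on bytes like \xd1. Base64-encoding the pickled bytes before storage, and decoding on the way out, keeps the stored value pure ASCII; the function names here are illustrative:

        import base64
        import pickle

        def dumps_for_db(obj):
            # pickle first, then base64 so the stored value is plain ASCII text
            return base64.b64encode(pickle.dumps(obj)).decode("ascii")

        def loads_from_db(text):
            return pickle.loads(base64.b64decode(text))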

    Read the article

  • Set-Cookie error appearing in logs when deployed to Google App Engine

    - by Jesse
    I have been working on converting one of our applications to be thread-safe. When testing on the local dev app server, everything works as expected. However, once the application is deployed, it seems that cookies are not being written correctly. Within the logs there is an error with no stack trace:

        2012-11-27 16:14:16.879
        Set-Cookie: idd_SRP=Uyd7InRpbnlJZCI6ICJXNFdYQ1ZITSJ9JwpwMAou.Q6vNs9vGR-rmg0FkAa_P1PGBD94; expires=Wed, 28-Nov-2012 23:59:59 GMT; Path=/

    Here is the block of code in question:

        # area of the code that emits the cookie
        cookie = Cookie.SimpleCookie()
        if not domain:
            domain = self.__domain
        self.__updateCookie(cookie, expires=expires, domain=domain)
        self.__updateSessionCookie(cookie, domain=domain)
        print cookie.output()

    Cookie helper methods:

        def __updateCookie(self, cookie, expires=None, domain=None):
            """
            Takes a Cookie.SimpleCookie instance and updates it with all of the
            private persistent cookie data, expiry and domain.

            @param cookie: a Cookie.SimpleCookie instance
            @param expires: a datetime.datetime instance to use for expiry
            @param domain: a string to use for the cookie domain
            """
            cookieValue = AccountCookieManager.CookieHelper.toString(self.cookie)
            cookieName = str(AccountCookieManager.COOKIE_KEY % self.partner.pid)
            cookie[cookieName] = cookieValue
            cookie[cookieName]['path'] = '/'
            cookie[cookieName]['domain'] = domain
            if not expires:
                # set the expiry date to 1 day from now
                expires = datetime.date.today() + datetime.timedelta(days = 1)
            expiryDate = expires.strftime("%a, %d-%b-%Y 23:59:59 GMT")
            cookie[cookieName]['expires'] = expiryDate

        def __updateSessionCookie(self, cookie, domain=None):
            """
            Takes a Cookie.SimpleCookie instance and updates it with all of the
            private session cookie data and domain.

            @param cookie: a Cookie.SimpleCookie instance
            @param domain: a string to use for the cookie domain
            """
            cookieValue = AccountCookieManager.CookieHelper.toString(self.sessionCookie)
            cookieName = str(AccountCookieManager.SESSION_COOKIE_KEY % self.partner.pid)
            cookie[cookieName] = cookieValue
            cookie[cookieName]['path'] = '/'
            cookie[cookieName]['domain'] = domain

    Again, the libraries in use are Python 2.7 and Django 1.2. Any suggestion on what I can try?
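    One hedged observation on the symptom: print writes to the application log, not to the HTTP response, which would explain a bare "Set-Cookie: ..." line appearing as a log entry while the browser never receives the cookie. If the enclosing code has access to the Django response object (assumed here), attaching each morsel to it is the usual route; a sketch:

        # instead of: print cookie.output()
        for name, morsel in cookie.items():
            response.set_cookie(name, morsel.value,
                                path=morsel['path'] or '/',
                                domain=morsel['domain'] or None,
                                expires=morsel['expires'] or None)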

    Read the article

  • Dojo Widgets not rendering when returned in response to XhrPost

    - by lazynerd
    I am trying to capture the selected item in a Dijit Tree widget, in order to render the remaining part of the web page. Here is the code that captures the selected item and sends it to the Django backend:

        <div dojoType="dijit.Tree" id="leftTree" store="leftTreeStore"
             childrenAttr="folders" query="{type:'folder'}" label="Explorer">
          <script type="dojo/method" event="onClick" args="item">
            alert("Execute of node " + termStore.getLabel(item));
            var xhrArgs = {
              url: "/load-the-center-part-of-page",
              handleAs: "text",
              postData: dojo.toJson(leftTreeStore.getLabel(item), true),
              load: function(data) {
                dojo.byId("centerPane").innerHTML = data;
                //window.location = data;
              },
              error: function(error) {
                dojo.byId("centerPane").innerHTML = "<p>Error in loading...</p>";
              }
            }
            dojo.byId("centerPane").innerHTML = "<p>Loading...</p>";
            var deferred = dojo.xhrPost(xhrArgs);
          </script>
        </div>

    The remaining part of the page contains HTML code with Dojo widgets. This is the code sent back as the 'response' to the select-item event. Here is a snippet:

        <div dojoType="dijit.layout.TabContainer" id="tabs" jsId="tabs">
          <div dojoType="dijit.layout.BorderContainer" title="Dashboard">
            <div dojoType="dijit.layout.ContentPane" region="bottom">
              first tab
            </div>
          </div>
          <div dojoType="dijit.layout.BorderContainer" title="Compare">
            <div dojoType="dijit.layout.ContentPane" region="bottom">
              Second Tab
            </div>
          </div>
        </div>

    It renders this HTML 'response', but without the Dojo widgets. Is handleAs: "text" in the xhrPost the culprit here?

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Until now my learning process has gone very smoothly, but now I'm a bit stuck. My page view retrieves large amounts of data from a database and puts it in the context. The template then generates different HTML tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src="..."/> tags. The Matplotlib chart is generated in my chart view and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have different questions:

    1) The data is retrieved twice from the database: once in the page view to render the tables, and again in the chart view to make the charts. What is the best way to pass the data, already in the context of the page, to the chart view?
    2) I need a lot of charts, each with a different dataset. I could make a chart view for each chart, but probably there is a better way. How do I pass the different dataset names to the chart view? Some charts have 20 datasets, so I don't think that passing these dataset parameters via the URL (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png"/>) is the right way. Any advice?
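    One conventional shape for question 2, sketched under the assumption that datasets can be addressed by name: let the chart view accept a repeated query parameter instead of path segments, so the img tag becomes /chart/?ds=a&ds=b, and use the cache to soften the double retrieval in question 1. A rough Django sketch, where load_dataset and render_chart are hypothetical helpers:

        from django.core.cache import cache
        from django.http import HttpResponse

        def chart(request):
            names = request.GET.getlist('ds')        # /chart/?ds=sales&ds=costs
            datasets = {}
            for name in names:
                rows = cache.get('dataset:' + name)  # the page view can prime this key
                if rows is None:
                    rows = load_dataset(name)        # hypothetical DB helper
                    cache.set('dataset:' + name, rows, 300)
                datasets[name] = rows
            response = HttpResponse(content_type='image/png')
            render_chart(datasets).print_png(response)   # hypothetical canvas factory
            return response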

    Read the article

  • Some jQuery-powered features not working in Chrome

    - by Enchantner
    I'm using the jCarouselLite plugin to create two image galleries on the main page of my Django-powered site. The code for the elements with navigation arrows is generated dynamically, like this:

        $(document).ready(function() {
            $('[jq\\:corner]').each(function(index, item) {
                item = $(item);
                item.corner(item.attr('jq:corner'))
            })
            $('[jq\\:menu]').each(function(index, item) {
                item = $(item);
                item.menu(item.attr('jq:menu'))
            })
            $('[jq\\:carousel]').each(function(index, item) {
                item = $(item);
                var args = item.attr('jq:carousel').split(/\s+/)
                lister = item.parent().attr('class') + '_lister'
                item.parent().append('<div id="'+ lister +'"></div>');
                $('#' + lister).append("<a class='nav left' href='#'></a><a class='nav right' href='#'></a>");
                toparrow = $(item).position().top + $(item).height() - 50;
                widtharrow = $(item).width();
                $('#' + lister).css({
                    'display': 'inline-block',
                    'z-index': 10,
                    'position': 'absolute',
                    'margin-left': '-22px',
                    'top': toparrow,
                    'width': widtharrow
                })
                $('#' + lister + ' .nav.right').css({
                    'margin-left': $('#' + lister).width() + 22
                })
                item.jCarouselLite({
                    btnNext: '#' + lister + ' .nav.right',
                    btnPrev: '#' + lister + ' .nav.left',
                    visible: parseInt(args[0])
                })
            })
        })

    The problem is that if the page is loaded via a URL typed into the address bar, some functions don't work and the second gallery appears with the wrong parameters; but if I come to this page by clicking a link, everything works perfectly. This happens only in Google Chrome (Ubuntu, stable 5.0.360.0), not in Firefox or Opera. What could be the reason?

    Read the article

  • Google App Engine, parsedatetime and TimeZones

    - by Ron
    Hey guys, I'm working on a Google App Engine / Django app and I encountered the following problem: in my HTML I have an input for time. The input is free text - the user types "in 1 hour" or "tomorrow at 11am". The text is then sent to the server via AJAX, which parses it using this Python library: http://code.google.com/p/parsedatetime/. Once parsed, the server returns an epoch timestamp of the time. Here is the problem: Google App Engine always runs on UTC. Say the local time is now 11am and the UTC time is 2am. When I send "now" to the server, it returns "2am", which is good, because I want the date to be received in UTC. When I send "in 1 hour", the server returns "3am", which is again good. However, when I send "at noon", the server returns "12pm", because it thinks I'm talking about noon UTC - but really I need it to return 3am, which is noon for the request sender. I can pass the browser's time zone along with the request, but that won't really help me: the parsedatetime library won't take a timezone argument (correct me if I'm wrong). Is there a way around this? Maybe setting the environment's TZ somehow? Thanks!
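    A workable pattern without touching TZ, offered as a sketch: have the browser send its UTC offset (JavaScript's getTimezoneOffset() returns it in minutes), let parsedatetime interpret the text against the user's wall-clock "now", and shift the result back into UTC yourself. The import path below matches the 2010-era releases of the library; verify it against your copy:

        import datetime
        import parsedatetime.parsedatetime as pdt

        def parse_to_utc(text, js_offset_minutes, now_utc=None):
            """js_offset_minutes is what getTimezoneOffset() returns:
            UTC minus local time, in minutes (e.g. 300 for UTC-5)."""
            now_utc = now_utc or datetime.datetime.utcnow()
            local_now = now_utc - datetime.timedelta(minutes=js_offset_minutes)
            cal = pdt.Calendar()
            struct, _status = cal.parse(text, local_now.timetuple())
            local_result = datetime.datetime(*struct[:6])
            return local_result + datetime.timedelta(minutes=js_offset_minutes)

    With this, "at noon" parses as noon in the sender's zone and comes back as the corresponding UTC instant.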

    Read the article

  • ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while, as the items were books. Now I want to add non-books. Some of the non-books have EANs or ISSNs; some do not. The system is PostgreSQL with Django apps for the frontend and JSON API, plus a few supporting Python command-line tools for management. The items in question are mostly books and artists' prints, some of which are self-published. What is nice about using ISBNs as primary keys is that on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etc., many of which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS, etc.) and some are hand-rolled. I tried to keep all of these parts nice and decoupled, but you know how things can get. I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? If anyone has experience with these systems, I'd love to hear about it; your advice is most welcome.
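    For the "migrate off ISBNs as primary keys in general" branch, the usual shape is a surrogate key with the external identifiers demoted to nullable, unique columns; a hedged Django sketch of what that model might look like, with illustrative field names:

        from django.db import models

        class Item(models.Model):
            # Django adds an auto-increment surrogate primary key ("id") by default
            title = models.CharField(max_length=255)
            isbn = models.CharField(max_length=13, unique=True, null=True, blank=True)
            ean = models.CharField(max_length=13, unique=True, null=True, blank=True)
            issn = models.CharField(max_length=8, unique=True, null=True, blank=True)

    PostgreSQL treats NULLs as distinct for unique constraints, so items without a code don't collide, and the ISBN validation and lookup utilities keep working against the isbn column instead of the key.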

    Read the article

  • Accessing a list sent from the server as a JSON object

    - by tazim
    How do I access a list, sent as a JSON object from a Django view, in the template received by the AJAX callback function? The code is as follows.

    views.py:

        def showfiledata(request):
            with open("/home/tazim/webexample/test.txt") as f:
                list = f.readlines()
                f.closed
            return_dict = {'filedata': list}
            json = simplejson.dumps(return_dict)
            HttpResponse(json, mimetype="application/json")

    In the template, showfile.html:

        <html>
        <head>
        <script type="text/javascript" src="/jquerycall/"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                $("button").click(function() {
                    $.ajax({
                        type: "POST",
                        url: "/showfiledata/",
                        datatype: "json",
                        success: function(data) {
                            var s = data.filedata;
                            $("#someid").html(s);
                        }
                    });
                });
            });
        </script>
        </head>
        <body>
            <form method="post">
                <button type="button">Click Me</button>
                <div id="someid"></div>
            </form>
        </body>
        </html>
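    Two hedged observations on the snippet as posted: the view never returns the HttpResponse, so jQuery receives an empty response, and the jQuery option is spelled dataType (case matters), so even a good response would not be parsed as JSON automatically. A minimal corrected view:

        def showfiledata(request):
            with open("/home/tazim/webexample/test.txt") as f:
                lines = f.readlines()   # the with-block closes f; 'list' also shadowed a built-in
            json = simplejson.dumps({'filedata': lines})
            return HttpResponse(json, mimetype="application/json")   # the return was missing

    On the client side, renaming datatype to dataType in the $.ajax options should make data.filedata directly usable in the success callback.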

    Read the article

  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André. From Python, the value looks like "Andr\u00c3\u00a9". The input is then decoded using json.loads:

        >>> import json
        >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
        {u'name': u'Andr\xc3\xa9'}

    When I store the above in a utf8 MySQL database, the data is stored like the following, using Django:

        SomeObject.objects.create(name=u'Andr\xc3\xa9')

    Querying the name column from a MySQL shell or displaying it in a web page gives:

        André

    The web page displays in utf8:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My database is configured in utf8:

        mysql> SHOW VARIABLES LIKE 'collation%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_unicode_ci |
        | collation_server     | utf8_unicode_ci |
        +----------------------+-----------------+
        3 rows in set (0.00 sec)

        mysql> SHOW VARIABLES LIKE 'character_set%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | utf8                       |
        | character_set_connection | utf8                       |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | utf8                       |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+
        8 rows in set (0.00 sec)

    How can I retrieve the word André from a web service, store it properly in a database with no data loss, and display it on a web page in its original form?
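    For what it's worth, these symptoms describe classic double encoding: u'Andr\xc3\xa9' is the UTF-8 byte sequence for é read back as two Latin-1 characters, which suggests the web service (or the layer reading it) is the step producing the mojibake, not MySQL. A sketch of the repair for values already fetched in this state:

        name = u'Andr\xc3\xa9'
        # reinterpret the characters as the Latin-1 bytes they arrived as,
        # then decode those bytes as the UTF-8 they actually are
        fixed = name.encode('latin-1').decode('utf-8')
        print(fixed)   # André

    The cleaner fix is upstream: decode the web service's raw bytes as UTF-8 exactly once, before json.loads ever sees them.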

    Read the article
