Search Results

Search found 4496 results on 180 pages for 'django uploads'.

Page 160/180

  • Should I use heroku or should I have my own ssl? [closed]

    - by user1744649
    Based on your experience, can you please advise what would be better for me? Issue: I build applications and there are two major constraints.
    1. SSL is needed, since I use the Facebook APIs, so only Heroku seems like a good option.
    2. My web components tend to hit the Max_Execution_Time limit very often, since I pull a lot of data through the Facebook APIs.
    Future possible purposes of this site:
    1. Will use more APIs from Google, Twitter, and others in the future.
    2. Might ask for donations.
    3. Just a hobby.
    I have two options:
    1. Build the site on Heroku itself, converting all the PHP components to a background worker in Python using Django.
    2. Don't use Heroku at all: do the complete hosting with GoDaddy (shared plan) and buy an SSL certificate so that I can use the Facebook APIs, etc.
    In this scenario, what do you suggest I do?

    Read the article

  • For what purposes can I use C++ to increase my skills?

    - by user824981
    I want to learn new things. Initially I was a PHP programmer, then I decided that wasn't enough, so I started learning the Java stack. It took me three months to learn Java, J2EE, Spring, Hibernate, Spring Security, Spring Roo, design patterns like MVC, and concepts like AOP and DI. I had never known any of that before, but I got an idea of what J2EE is. After three months I had only made a simple page with a registration form integrated with Spring Security. I wanted to build one complete project in it, but that was too much for me, and I didn't want to spend more time on it since I would then need to host it as well, so I dropped it. Then I started learning Python, wrote a few sysadmin scripts, moved on to Django, and am now finishing a complete web app in Python. Now I want to learn C++, but before that I need to find out what I can do with it. I know Python is very useful for me because I have my own servers, so I can write scripts and websites with it. But I am confused about which areas C++ can help me in. I don't want to end up like I did with Java, where I either have big projects or nothing for day-to-day use.

    Read the article

  • I am having a hard time learning Python, is it just me? [closed]

    - by Carpet
    For the past two weeks I have been trying to learn Python and a framework for web development. While doing so I learned a lot, but not what I was looking for. I did manage to get everything set up and running and followed tutorials, but I still have not managed to create a navigation bar and a simple templated website. My goal is to create web applications (like a blog) and perhaps platforms similar to Stack Overflow. Which language was Stack Overflow created in? I believe that Django or Tornado (which I tried) are aimed more at people who come from desktop application development; it is hard for me to make sense of such a complex and fragmented system. I'm able to develop with PHP and have already created blogs and similar applications. If Python and a framework are not for me, what type of language would be, and which languages are used for the kind of platforms I would like to develop? I only left out PHP because I later found its code hardly readable and quickly cluttered, and I love how readable Python code is.
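
    For scale, the kind of page described above needs very little code in a Python micro-framework. A minimal sketch using Flask (Django splits the same idea across a URL pattern, a view, and a template file; the route names and inline template here are made-up examples):

      # Minimal two-page site with a shared navigation bar (illustrative only).
      from flask import Flask, render_template_string

      app = Flask(__name__)

      PAGE = """
      <nav>
        <a href="{{ url_for('home') }}">Home</a> |
        <a href="{{ url_for('about') }}">About</a>
      </nav>
      <h1>{{ title }}</h1>
      """

      @app.route("/")
      def home():
          return render_template_string(PAGE, title="Home")

      @app.route("/about")
      def about():
          return render_template_string(PAGE, title="About")

      if __name__ == "__main__":
          app.run(debug=True)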

    Read the article

  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it leaves out a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, none of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only covers loading by ID (using Get, not Load) and the Criteria API, for which a paging example is presented as well as some common filtering options (property equals/like/between; no examples of conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications; it basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are listed and evaluated in a chapter of their own; the list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible. However, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag into which a lot of things are thrown, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion, web.config settings related to the maximum request length for file uploads, and ending with XML configuration with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks:
    - Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
    - Load versus Get usage
    - Other useful ISession methods
    - First-level cache (Identity Map pattern)
    - Collection types other than Bag (Set, List, Map, IdBag, etc.)
    - Fetch options
    - User Types
    - Filters
    - Named queries
    - LINQ examples
    - HQL examples
    And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big, BIG, B-I-G problem, and with SP2010 it's going to be more prominent, because at least on the server side SharePoint can support large files much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are:
    - Reliably transferring gigabyte-or-bigger files without breakage over a protocol like HTTP, which is better suited to tiny transfers like images and text.
    - Not killing your browser, because it has to load all of that in memory.
    - Not killing your web server: everything you upload through an HTTP POST first gets streamed into IIS (w3wp.exe) memory before the ENTIRE file finishes uploading and before it is stored. This means you cannot show an accurate, live progress bar of the upload (IIS gives you no accurate metric of an upload; all the counters it gives you are approximate), your w3wp.exe eats up all server memory (4 GB of it for a 4 GB upload), and a thread is kept busy for the entire duration of the upload, greatly limiting your web server's capacity to serve new requests and killing effective load balancing.
    - Not killing your content database: as you upload a very large file, that file gets written sequentially into the DB, which for a very large file severely impacts database performance.
    I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions one needs to ask and too many things one needs to plan for. However, one part that will remain common across all large file upload scenarios, in SharePoint or outside of SharePoint, is doing it efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting... the video.
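
    Independent of the SharePoint and Telerik specifics above, the underlying principle is to stream the file instead of buffering it whole in memory. A minimal client-side sketch in Python with the requests library (the endpoint URL, header name, and chunk size are made-up examples, not a SharePoint API):

      # Upload a large file in fixed-size chunks so neither the client nor the
      # receiving process has to hold the whole file in memory at once.
      import os
      import requests

      def stream_upload(path, url, chunk_size=1024 * 1024):
          def chunks():
              with open(path, "rb") as f:
                  while True:
                      block = f.read(chunk_size)
                      if not block:
                          break
                      yield block

          headers = {"X-File-Name": os.path.basename(path)}
          # Passing a generator makes requests use chunked transfer encoding,
          # so memory use stays at roughly one chunk regardless of file size.
          return requests.post(url, data=chunks(), headers=headers)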

    Read the article

  • BoundingBox created from mesh to origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create bounding boxes (for collision purposes) for each of the meshes, they are calculated from the origin to the far end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya and that it's interpreted wrongly... or something. Here's the code for when I create the boxes:
      private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh)
      {
          Matrix[] boneTransforms = new Matrix[model.Bones.Count];
          model.CopyAbsoluteBoneTransformsTo(boneTransforms);
          BoundingBox result = new BoundingBox();
          foreach (ModelMeshPart meshPart in mesh.MeshParts)
          {
              BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
              if (meshPartBoundingBox != null)
                  result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value);
          }
          result = new BoundingBox(result.Min, result.Max);
          return result;
      }

      private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform)
      {
          if (meshPart.VertexBuffer == null)
              return null;
          Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position);
          if (positions == null)
              return null;
          Vector3[] transformedPositions = new Vector3[positions.Length];
          Vector3.Transform(positions, ref transform, transformedPositions);
          for (int i = 0; i < transformedPositions.Length; i++)
          {
              Console.WriteLine(" " + transformedPositions[i]);
          }
          return BoundingBox.CreateFromPoints(transformedPositions);
      }

      public static class VertexElementExtractor
      {
          public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage)
          {
              VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration;
              VertexElement[] elements = vd.GetVertexElements();
              Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3;
              if (!elements.Any(elementPredicate))
                  return null;
              VertexElement element = elements.First(elementPredicate);
              Vector3[] vertexData = new Vector3[meshPart.NumVertices];
              meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride);
              return vertexData;
          }
      }
    Here's a link to a picture of the mesh (the model holds six meshes, but I'm only rendering one and its bounding box to make it clearer): http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm referring to is the cube-like one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and triple-) checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?

    Read the article

  • Basic web architecture : Perl -> PHP

    - by Sunny Jim
    This is an architecture question. If there is a better forum, please redirect me; apologies in advance. Essentially every website is built around a relational database, right? When a user uploads form data, that data is stored in a table, and the problem is that the table structure(s) need to be modified whenever the website form is modified. I understand that modern web frameworks work around this problem by automatically building forms based on the table structure. For the last 20 years I have been building websites using Perl. When I first encountered this problem, the easiest solution was to save serialized Perl objects as data BLOBs. After XML's introduction, this solution worked even better, because XML is so effective at representing arbitrary data. This approach is consistent with the original Perl principles of Hubris, Laziness, and Impatience, and I'm pretty committed to it. Obviously, the biggest drawback is that this solution locks me into the Perl interpreter. So instead, I've just completed a prototype of a universal RDB table. The prototype is written in Perl, but porting it to PHP will be a good chance to develop those skills. The principle is based on the XML::Dumper module, which converts arbitrary Perl data structures into uniform XML. With my approach, each XML node is stored as a table record. I underestimated this undertaking and rolled something up myself, but the effort allows me to discuss the basic design instead of implementation details. As mentioned, I'm pretty committed to this approach of using flexible data structures. It's been successfully deployed on many websites, large and complex. But are there any drawbacks I've overlooked? I rolled my own; are other people taking a similar approach to their data? What kinds of solutions are available? I have not abandoned my dream of eventually contributing something useful to the worldwide community. In order to proceed, the next step would be peer review. How does one pursue that effort? Thanks! -Jim
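
    The "each XML node becomes a table record" idea can be sketched in a few lines of Python (the table layout, column names, and helper below are illustrative assumptions, not the poster's prototype): every node of an arbitrary nested structure is stored as one row pointing at its parent row.

      # Illustrative only: flatten arbitrary nested data (dicts/lists/scalars)
      # into (id, parent_id, key, value) rows in a single universal table.
      import sqlite3

      def store(conn, data, parent_id=None, key=None):
          """Insert one node, recurse into its children, return the node id."""
          is_leaf = not isinstance(data, (dict, list))
          cur = conn.execute(
              "INSERT INTO node (parent_id, key, value) VALUES (?, ?, ?)",
              (parent_id, key, str(data) if is_leaf else None),
          )
          node_id = cur.lastrowid
          children = data.items() if isinstance(data, dict) else (
              enumerate(data) if isinstance(data, list) else ())
          for k, v in children:
              store(conn, v, node_id, str(k))
          return node_id

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, key TEXT, value TEXT)")
      store(conn, {"title": "My post", "tags": ["perl", "php"]})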

    Read the article

  • facebook share and opengraph

    - by hannit cohen
    I'm managing a video blog. The blog contains a main YouTube video on the homepage and several thumbnail images of other videos elsewhere. I have a share button for the main page and the single post pages, and have added Open Graph tags to specify which image should be used. For some reason Facebook ignores my Open Graph image and uses other images it finds on the page... The header looks like this (for the homepage):
      <!-- Facebook Opengraph -->
      <meta property="fb:app_id" content="155967927783206" />
      <meta property="og:url" content="http://mttv.co.il/2010/12/%d7%9e%d7%a4%d7%92%d7%a9-%d7%a1%d7%99%d7%99%d7%a2%d7%95%d7%aa-%d7%92%d7%a0%d7%a0%d7%95%d7%aa-%d7%9c%d7%93%d7%95%d7%a8%d7%aa%d7%99%d7%94%d7%9d-1959-2010-%d7%91%d7%9e%d7%a2%d7%9c%d7%95%d7%aa/"/>
      <meta property="og:title" content="???? ?????? ????? ???????? 1959-2010 ??????" />
      <meta property="og:description" content="???? ????? ?? ?????? ????? ?????? ????????? ???? ?????? 1959 ??? 2010 ?????? ????? ??????? ???' ??? ??? ????? ???? ??????? ???????? ?????? ?&quot;? ???? ??? ?&quot;? ????? ?????? ????? ???? 08.12.10 " />
      <meta property="og:type" content="article" />
      <meta property="og:image" content="http://www.mttv.co.il/wp-content/uploads/2010/12/Gvi91UEjCAw_mid-135x77.jpg" />
    The website address is: http://mttv.co.il Any help will be appreciated.

    Read the article

  • New Features Of WordPress 3.3 You Must Know

    - by Gopinath
    After months of beta testing, the WordPress 3.3 release is due at the end of this month. There are several new features packed into the new version, and a few of them are going to excite WordPress admins. In this post we discuss the most interesting of them.
    1. Drag and Drop Media Uploads. One of the biggest improvements in this version of WordPress is its all-new media uploader. You can now upload multiple files by just dragging and dropping, instantly resize images, and filter files by their type. The media uploader sports a brand new look. WordPress adopted the Plupload library to power its media uploader component; it's written by the same team that created the popular TinyMCE editor.
    2. Improved Admin Bar (Toolbar). The admin bar, newly called the toolbar, has had a handful of makeovers. Rarely used items like the search box have been removed so the bar is less cluttered. The user menu and its related options have moved to the right, much like Google's user bar, and there are a few changes to the colour of the bar to make it easier on the eyes.
    3. Fly-out Admin Menus. All the left sidebar menus of the WordPress admin now sport a fly-out menu style to save a click. In previous versions, to reach a submenu on the left sidebar you had to first click on the category and then choose the menu item from the expanded list. Now, on mouse-over, you see a fly-out of menu items.
    4. Adaptive Admin: Layout Auto-Adjusts To Fit Various Devices. If you own an iPad or any other so-called tablet, you are going to love this feature. The WordPress admin site has become much friendlier on tablets and smartphones: WordPress now auto-adjusts the layout to fit the device through which you are accessing the admin site. Accessing the admin dashboard on your tablet is going to be more fun.
    5. Other Features. Now that we have covered the most useful four features, here is a small list of others that may interest you: nice tooltips are displayed wherever possible to help newbies understand the admin site; responsive layouts; jQuery 1.7 and jQuery UI 1.8.16 are the workhorses of WordPress; and performance improvements. This article, titled New Features Of WordPress 3.3 You Must Know, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • The Connected Company: WebCenter Portal Activity Streams

    - by Michael Snow
    Guest post by Mitchell Palski, Oracle Staff Sales Consultant. Social media is sure to have made its way into your company or government organization. Whether it's discussion threads, blog posts, Facebook-style profile pages, or just a simple instant-messenger application, in one way or another your employees are connected. What are the objectives of leveraging social media in your organization?
    - Facilitating knowledge transfer
    - More effectively organizing team events
    - Generating inter-community discussions to solve problems
    - Improving resource management
    - Increasing organizational awareness
    - Creating an environment of accountability
    Do any of the business objectives above stand out to you as needs? If so, consider leveraging the WebCenter Portal Activity Stream as part of your solution. In WebCenter Portal, the Activity Stream feature provides a streaming view of the activities of your connections, actions taken in portals, and business activities that looks a lot like a combined Facebook and Twitter newsfeed. Activity Stream can note when a user:
    - Posts feedback (comments)
    - Uploads a document
    - Creates a new blog, page, event, or announcement
    - Starts a new discussion
    - Streams messages and attachments entered through WebCenter Publisher (similar to Twitter)
    Through Activity Stream preferences, you can select which of these activities to show or hide from your personal Activity Stream. Here's what you get:
    - A real-time stream of activities within a portal or sub-portal increases awareness across your organization or within a working group.
    - A complete list of user actions reduces the time-to-find for users who need to interact with the latest activities in your portal.
    - Users can publish to their groups when tasks are finished, for complete group traceability and accountability as well as improved resource management.
    - Project discussions and shared documents that require the expertise of someone outside a working group now get increased visibility across your organization.
    There's a reason that commercial social media tools like Facebook and Twitter have been so successful: they spread information in an aesthetically appealing and easy-to-read format. Strategically placing an activity feed within your portal is analogous to sending your employees a daily newsletter, events calendar, recent documents report, and list of announcements, BUT ALL IN ONE!

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question I was asking myself at the end of 2010: how can I get my Silverlight code organized? My requirements for a snippet manager were:
    - It needs to be FREE.
    - An easy way to view XAML and C# code-behind together in one "view".
    - The ability to store the code snippets in the cloud in case my HDD dies.
    - Searchable keywords to quickly find code snippets.
    I started looking for a snippet manager that would let me do just that and finally found Snippet Manager. Before going any further, one of the most important things to note is that this software supports 37 languages; it's not just for Silverlight or C# developers. It supports Java, SQL and even COBOL. Below is a screenshot of Snippet Manager showing my Silverlight code snippet. You will notice that I have highlighted two sections: the top part is my XAML and the bottom is my C# code-behind. I've included a sample of my code snippets so that you can get an idea of how I organized them. Another nice thing about this software is that it also supports plain text; I added some connection strings in the TEXT section. Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called "snippets" on my FTP server and hit the upload button once I finished adding my new code snippets. This allows me to use the code snippets on another computer with this application on my USB key. See the screenshots below: enter your FTP credentials, hit the Upload button on the toolbar, then log in to your FTP server and verify that the files are now on the server. Another great feature of Snippet Manager is that you can also integrate it into VS2010 by clicking Tools -> External Tools and setting up the external tool to point to the executable. You can then launch it by going to Tools -> Snippet Manager. If you want, you could also add a shortcut to launch the program with hotkeys. As you can see, this is a nice little program that includes everything needed to keep your code snippets organized and clean. I didn't go over every feature, but this is something you might want to download and give a shot.

    Read the article

  • What is the right way to group this project into classes?

    - by sigil
    I originally asked this on SO, where it was closed and it was recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process:
    1. Get the user's FTP credentials (username & password).
    2. Check that the credentials establish a valid connection to the FTP server.
    3. Query several SharePoint lists and join the results of those queries to create a list of items that need to have action taken on them. Each item in the list has a folder.
    4. For each item: zip the contents of the folder, upload the folder to the FTP server using SFTP, and update the item's SharePoint data.
    5. Email the user an Excel report showing, e.g., items without folder paths and items that failed to zip or upload.
    Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates. I've defined the following set of classes, each of which is in its own .cs file:
    - SFTP: file transfer processes.
    - DataHandler: SharePoint data retrieval/querying/updating processes; also makes and uploads the zip files.
    - Exceptions: not just one class; this is the .cs file where I have all of my exception classes.
    - Report: builds and sends the report.
    - Program: the main class for running the program.
    I recognize that the DataHandler class is a god object, but I don't have a good idea of how to refactor it. I feel like it should be more fine-grained than just breaking it into SharePoint, Zip, and Upload, but maybe that's it. Also, I haven't yet worked out how to combine the periodic behavior with the "wait for user input at any point in the process" part; I think that involves threads, which means other classes to manage the threads... I'm not that well-versed in design patterns, but is there one that fits this project well? If this is too big a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here.
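
    As a rough illustration of one possible finer-grained split (sketched in Python for brevity even though the project above is C#; every class and method name here is a hypothetical example, not taken from the question):

      # Hypothetical decomposition: one class per responsibility, plus an
      # orchestrator that a timer or scheduler can call periodically.
      from dataclasses import dataclass

      @dataclass
      class Credentials:
          username: str
          password: str

      class CredentialPrompt:           # step 1: ask the user
          def get(self) -> Credentials: ...

      class FtpGateway:                 # step 2, plus the upload in step 4
          def verify(self, creds: Credentials) -> bool: ...
          def upload(self, creds: Credentials, path: str) -> None: ...

      class ItemCatalog:                # step 3, plus the item update in step 4
          def pending_items(self) -> list: ...
          def mark_processed(self, item) -> None: ...

      class FolderZipper:               # zip step
          def zip_folder(self, folder: str) -> str: ...

      class ReportMailer:               # step 5
          def send(self, succeeded: list, failed: list) -> None: ...

      class Pipeline:
          """One pass over the work; run() can be invoked on a schedule."""
          def __init__(self, prompt, ftp, catalog, zipper, mailer):
              self.prompt, self.ftp = prompt, ftp
              self.catalog, self.zipper, self.mailer = catalog, zipper, mailer

          def run(self) -> None:
              creds = self.prompt.get()
              if not self.ftp.verify(creds):
                  return  # caller alerts the user and asks for credentials again
              succeeded, failed = [], []
              for item in self.catalog.pending_items():
                  try:
                      archive = self.zipper.zip_folder(item.folder)
                      self.ftp.upload(creds, archive)
                      self.catalog.mark_processed(item)
                      succeeded.append(item)
                  except Exception:
                      failed.append(item)
              self.mailer.send(succeeded, failed)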

    Read the article

  • Windows scheduled task fails to complete with error code 0xc000013a

    - by Brian
    I'm using Windows Server 2003 and have a scheduled task that fails to complete. The task is set to run a Windows Command Script (.cmd) at 3pm each day. The script runs a program that extracts some data from a SQL Server database and uploads that data to an FTP server. The error code displayed in the "Last result" column of the scheduled tasks folder is 0xc000013a. A quick Google search leads to this Microsoft support page that states: The most common "C" error code is "0xC000013A: The application terminated as a result of a CTRL+C". No-one is logged in at the time the task runs, so there's no-one around to press CTRL+C. I'm not sure I understand what is being said here in the Microsoft documentation. I've checked the rudimentary things - the scheduled task is enabled, scheduled to run each day, and pointing to a file that does exist in a valid location. Interestingly, when I run this task manually (either by running the .cmd script from the command line, or by right-clicking the task and clicking "Run") the task completes successfully. What does this error code mean, and how can I get this task to run when I'm not there to force it?

    Read the article

  • subnetting .. is 192.168.0.1/24 different than 192.168.0.1/25

    - by SM
    So I am trying to understand subnetting. I understand that you can take the 192.168.0.0/24 network and break it up into 2 subnets: 192.168.0.0/25 and 192.168.0.128/25. So what if I need to create 2 more subnets inside both of those subnets? What if I need to create 2 subnets, which have 2 more subnets inside them, and each of those 2 subnets has 2 hosts? As per my calculation, this would be something like:
    top 2 subnets: 192.168.0.0/25 and 192.168.0.128/25
    subnets of 192.168.0.0/25: 192.168.0.0/26 - 192.168.0.128/26
    subnets of 192.168.0.128/25: 192.168.0.128/26 - 192.168.0.192/26
    hosts of 192.168.0.0/25: 192.168.0.1/25 - 192.168.0.2/25
    hosts of 192.168.0.128/25: 192.168.0.129/25 - 192.168.0.130/25
    hosts of 192.168.0.0/26: 192.168.0.1/26 - 192.168.0.2/26
    hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26
    hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26
    hosts of 192.168.0.192/26: 192.168.0.193/26 - 192.168.0.194/26
    The above doesn't seem right: I am moving down like a tree, but the subnet mask and IPs are being repeated; I am just adding 1 to the /x and making up the IPs from there. Can anyone please tell me if this is correct? The scenario I am trying to understand is similar to this, although I just want to add hosts on each level as well: http://www.ciscotrick.com/wp-content/uploads/2008/08/the_mathematical_relationship_between_network_block_sizes.jpg
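
    One way to sanity-check this kind of calculation is Python's standard ipaddress module. A small sketch using the prefixes from the question:

      # Split 192.168.0.0/24 into two /25s, then split each /25 into two /26s,
      # and show the first two usable host addresses of each /26.
      import ipaddress

      net = ipaddress.ip_network("192.168.0.0/24")
      for half in net.subnets(prefixlen_diff=1):            # the two /25s
          print(half)
          for quarter in half.subnets(prefixlen_diff=1):    # the two /26s inside each /25
              hosts = list(quarter.hosts())
              print("  ", quarter, "first hosts:", hosts[0], hosts[1])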

    Read the article

  • flask, lighttpd with fastcgi can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the lighttpd configuration file, built using the Flask documentation (http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd):
      server.modules = (
          "mod_access",
          "mod_alias",
          "mod_compress",
          "mod_redirect",
          "mod_rewrite",
          "mod_fastcgi",
      )
      server.document-root = "/var/www"
      server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
      server.errorlog = "/var/log/lighttpd/error.log"
      server.pid-file = "/var/run/lighttpd.pid"
      server.username = "www-data"
      server.groupname = "www-data"
      index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" )
      url.access-deny = ( "~", ".inc" )
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
      var.home_dir = "/var/lib/lighttpd"
      var.socket_dir = home_dir + "sockets/"
      ## Use ipv6 if available
      #include_shell "/usr/share/lighttpd/use-ipv6.pl"
      dir-listing.encoding = "utf-8"
      server.dir-listing = "enable"
      compress.cache-dir = "/var/cache/lighttpd/compress/"
      compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
      include_shell "/usr/share/lighttpd/create-mime.assign.pl"
      include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
      fastcgi.server = ("weibo/callback.fcgi" => ((
          "socket" => "/tmp/weibocrawler-fcgi.sock",
          "bin-path" => "/var/www/weibo/callback.fcgi",
          "check-local" => "disable",
          "max-procs" => 1
      ))
      )
      url.rewrite-once = (
          "^(/weibo($|/.*))$" => "$1",
          "^(/.*)$" => "weibo/callback.fcgi$1"
    and this is the script I'm trying to run:
      #!/home/nrl/kuro/weiboenv/bin/python
      from flup.server.fcgi import WSGIServer
      from callback import app
      if __name__ == '__main__':
          WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()
    But when testing the configuration file I get this error:
      2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1
    When I remove the url rewrite, I get these errors in the log even though the daemon starts:
      2013-07-02 16:25:53: (log.c.166) server started
      2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
      2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
      2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
      2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
      2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
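
    For reference, a minimal FastCGI launcher of the kind the Flask docs describe passes the imported Flask object itself to WSGIServer. A sketch, assuming the Flask instance in callback.py is named app:

      #!/usr/bin/env python
      # Minimal flup FastCGI launcher for a Flask app (sketch).
      from flup.server.fcgi import WSGIServer

      from callback import app

      if __name__ == '__main__':
          WSGIServer(app, bindAddress='/tmp/weibocrawler-fcgi.sock').run()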

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Diogo Lopes
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases automatically. I create a new maintenance plan and add a "Back Up Database Task", select all User databases, and choose a path to back up to. IMAGE in http://www.freeimagehosting.net/uploads/16be7dce43.jpg [new user limitation] When I save and try to execute this plan, I get the following error message: =================================== Execution failed. See the maintenance plan and SQL Server Agent job history logs for details. =================================== Job 'Backup.Subplan_1' failed. (SqlManagerUI) I've checked the maintenance plan log, the agent log, and just about every log file I can find and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. Thanks for any suggestions!

    Read the article

  • Webmin / Virtualmin running php as www-data, is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because all the domain users own their directories, and the default permissions don't let www-data write to those folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter). So I figured that if I set it to "default" this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580; kernel and CPU: Linux 2.6.35.4-rscloud on x86_64; Virtualmin version 3.90.gpl GPL; Ubuntu 10.04 LTS (Lucid). A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png

    Read the article

  • Why isn't Google Web Toolkit more popular?

    - by gerdemb
    I've recently become intrigued with Google Web Toolkit and have started playing with it on some personal projects. I've noticed though that it doesn't seem to be very popular. For example, two major freelancing job boards (www.elance.com and www.odesk.com) list no jobs for GWT and the list of projects using it on Google's official site is pretty slim http://code.google.com/webtoolkit/app_gallery.html (compare to Django projects for example http://www.djangosites.org/). This seems odd to me as GWT has been around since 2006 and is supported by the Google brand name. It also neatly solves the problem of creating cross-browser completely dynamic websites that I haven't seen possible with any other tool. So, why the lack of acceptance?

    Read the article

  • PHP form class

    - by Oli
    I'm used to ASP.NET and Django's methods of doing forms: nice object-oriented handlers where you can specify regexes for validation and do everything in a very simple way. After months of living happily without it, I've had to come back to PHP for a project and noticed that everything I used to do with PHP forms (manual output, manual validation, extreme pain) was utter rubbish. Is there a nice, simple and free class that does form generation and validation like it should be done? Clonefish has the right idea, but it's way off on the price tag.
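
    For reference, the Django style of form handling mentioned above looks roughly like this (a minimal sketch; the form and field names are made-up examples):

      # Minimal Django form with regex validation.
      from django import forms

      class ContactForm(forms.Form):
          name = forms.CharField(max_length=100)
          phone = forms.RegexField(
              regex=r'^\+?\d{7,15}$',
              error_messages={'invalid': 'Enter a valid phone number.'},
          )

      # In a view, ContactForm(request.POST).is_valid() runs all validation,
      # and {{ form.as_p }} renders the labelled fields in a template.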

    Read the article

  • [Zend Framework - Ubuntu10.04- Lamp- First Project] i get 500 error on http://localhost/zftutorial/p

    - by meyosef
    Hi, I am new to Zend and LAMP. My software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (following this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf), but when I go to http://localhost/zftutorial/public I get a 500 error. The output of dir -l for zftutorial:
      drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application
      drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs
      drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library
      drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject
      drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public
      drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests
    My /etc/apache2/sites-available/default:
      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>
    Thanks,

    Read the article

  • Python web development framework for python 3.1 user

    - by iama
    I have been learning Python for some time now. When starting this "learning Python" endeavor, I decided to learn the latest and greatest version, 3.1. I regret that decision now, because I wanted to try my hand at some of the Python web development frameworks, and it looks like many of them do not support 3.1 yet; it might take them years to support the new version of Python, especially Django and TurboGears. This is really disappointing. Therefore, SO users, do you have any recommendation for a web framework that runs on 3.1 and supports some of the modern (I guess I will never learn ;-)) web framework features like MVC, ORM, URL routing, caching, etc.? Many thanks for your response.

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up vsftpd with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:
      listen=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      local_umask=022
      anon_upload_enable=YES
      dirmessage_enable=YES
      use_localtime=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      chroot_local_user=YES
      virtual_use_local_privs=YES
      guest_enable=YES
      guest_username=virtual
      user_sub_token=$USER
      local_root=/var/www/$USER
      hide_ids=YES
      secure_chroot_dir=/var/run/vsftpd/empty
      pam_service_name=vsftpd
      rsa_cert_file=/etc/ssl/private/vsftpd.pem
    /etc/pam.d/vsftpd contains:
      auth required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
      account required pam_permit.so crypt=hash
    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user 'virtual'. With this configuration users can log on, but cannot upload files. The error given in /var/log/vsftpd.log is:
      Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
      Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"
    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After that 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90 second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server: kernel: NFSD: starting 90-second grace period If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers still interrupted. Three questions: What is the 90 second grace period? What's it there for? How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node? Is it actually possible to migrate the NFS server without having large file uploads drop?

    Read the article

  • on debian, lighttpd and apache2 both using port 80, lighttpd throws "address already in use" error

    - by user1960581
    I bought a Linode (linode.com) server the other day. I've been trying to run lighttpd and apache2 on the same port, using lighttpd for static files. As Linode only provides ONE IPv4 address, I tried to bind lighttpd to the IPv6 address. That's where I get the same error every single time: can't bind to port [ipv6] 80, address already in use. When I tried binding to the IPv4 address, everything worked. Please help me; this has been driving me nuts for the last two days. My lighttpd.conf file (the IPv6 address isn't the real one):
      server.modules = (
          "mod_access",
          "mod_alias",
          "mod_compress",
          "mod_redirect",
      #   "mod_rewrite",
      )
      server.document-root = "/var/www"
      server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
      server.errorlog = "/var/log/lighttpd/error.log"
      server.pid-file = "/var/run/lighttpd.pid"
      server.username = "www-data"
      server.groupname = "www-data"
      server.port = 80
      server.bind = "2600:3c02::0000"
      server.use-ipv6 = "enable"
      #server.pid-file = "/var/run/lighttpd.pid"
      index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
      url.access-deny = ( "~", ".inc" )
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
      compress.cache-dir = "/var/cache/lighttpd/compress/"
      compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )
      # default listening port for IPv6 falls back to the IPv4 port
      #include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
      include_shell "/usr/share/lighttpd/create-mime.assign.pl"
      include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
      ### ipv6 ###
      $SERVER["socket"] == "[2600:3c02::0000]:80" {
      #   accesslog.filename = "var/log/lighttpd/ipv6/access.log"
      #   server.document-root = "/var/www/"
      #   server.error-handler-404 = "/index.php?error=404"
      }
    And the error message:
      can't bind to port, 2600:3c02::0000 Address already in use.

    Read the article

  • Python virtualenv conflicting

    - by Fernando
    I'm trying to learn Django, so I started by reading about virtualenv. After installing it with pip (sudo pip install virtualenv), I end up with two virtualenv paths: virtualenv at /usr/local/bin/virtualenv and virtualenv-2.7 at /usr/local/bin/virtualenv-2.7. If I use virtualenv-2.7 it seems to work fine, but if I use virtualenv, new modules get added to /usr/local/bin instead of being installed inside the environment. Example:
      cd ~
      virtualenv v1
      source v1/bin/activate
      easy_install yolk
      which yolk   # /usr/local/bin
    If I use virtualenv-2.7, yolk gets installed correctly inside v1. Did I mess up the installation? How can I fix this (maybe uninstall virtualenv and start over)? Thanks for any help!
    Edit: I figured out that I have two easy_install binaries, /usr/bin/easy_install-2.7 and /usr/bin/easy_install:
      easy_install --version       # distribute 0.6.24dev-r0
      easy_install-2.7 --version   # distribute 0.6.24dev-r0
    so this may be the cause of the problem. More info: Python version 2.7.3, virtualenv version 1.10.1.
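
    A quick way to see which interpreter and environment are actually active (a small sketch, independent of the poster's setup):

      # Run this inside the activated virtualenv: if sys.prefix still points at
      # the system Python (e.g. /usr), installs will land outside the environment.
      import sys

      print("interpreter:", sys.executable)
      print("prefix     :", sys.prefix)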

    Read the article
