Search Results

Search found 5772 results on 231 pages for 'django exceptions'.

Page 178 of 231

  • Is QtQuick.Controls available on Ubuntu 13.10?

    - by javascript is future
    I was looking to do UI development in QML, and I really want it to look native. I found the QtQuick.Controls (http://qt-project.org/doc/qt-5.1/qtquickcontrols/qtquickcontrols-index.html), but when I try to make a simple application, it tells me that QtQuick.Controls isn't installed. main.qml: import QtQuick 2.1 import QtQuick.Controls 1.0 Rectangle { height: 200 width: 200 } terminal: $ qmlscene main.qml file:///tmp/main.qml:2 module "QtQuick.Controls" is not installed Also, I downloaded the source from https://qt.gitorious.org/qt/qtquickcontrols/source/stable, ran qmake && make, but this returned the following output: cd src/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/src.pro -o Makefile ) && make -f Makefile make[1]: Entering directory '/tmp/qtquickcontrols/src' cd controls/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/controls/controls.pro -o Makefile ) && make -f Makefile make[2]: Entering directory '/tmp/qtquickcontrols/src/controls' g++ -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -O2 -fvisibility=hidden -fvisibility-inlines-hidden -std=c++0x -fno-exceptions -Wall -W -D_REENTRANT -fPIC -DQT_NO_XKB -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_PLUGIN -DQT_QUICK_LIB -DQT_QML_LIB -DQT_WIDGETS_LIB -DQT_NETWORK_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/include/qt5 -I/usr/include/qt5/QtQuick -I/usr/include/qt5/QtQml -I/usr/include/qt5/QtWidgets -I/usr/include/qt5/QtNetwork -I/usr/include/qt5/QtGui -I/usr/include/qt5/QtGui/5.1.1 -I/usr/include/qt5/QtGui/5.1.1/QtGui -I/usr/include/qt5/QtCore -I/usr/include/qt5/QtCore/5.1.1 -I/usr/include/qt5/QtCore/5.1.1/QtCore -I.moc/release-shared -o .obj/release-shared/qquickaction.o qquickaction.cpp qquickaction.cpp:49:39: fatal error: private/qguiapplication_p.h: No such file or directory #include <private/qguiapplication_p.h> ^ Is there some PPA I could use, or do I have to wait for Trusty to come out before I can use native controls from Qt? Regards

    Read the article

  • Thread safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and always accepted that GUI interactions had to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions had to be called on gtk_main. Similarly, I looked at SWT to see in what ways it was different to Swing and to see if it was worth using, and again found the UI thread idea, and I am sure that these 3 are not the only toolkits to use this model. I was wondering if there is a reason for this design, i.e. what is the reason for keeping UI modifications isolated to a single thread. I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread safe graphics library, or is such a thing actually impossible? I found this http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness which seems to describe GTK differently to how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? Is it possible to implement this in Swing (as the EDT model does not actually prevent access from other threads, it just often leads to Exceptions)?
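    The standard answer in practice is to keep all widget access on the one UI thread and marshal results back to it from worker threads. Below is a minimal sketch of that pattern in Python/Tkinter, which enforces the same single-UI-thread rule as Swing and GTK+; the names and the fake workload are made up for illustration.

        import queue
        import threading
        import tkinter as tk

        results = queue.Queue()          # thread-safe hand-off point

        def worker():
            # Slow work happens off the UI thread; only the result crosses over.
            total = sum(range(10_000_000))
            results.put(total)

        def poll():
            # Runs on the UI thread, so touching the widget here is safe.
            try:
                label.config(text=f"Result: {results.get_nowait()}")
            except queue.Empty:
                root.after(100, poll)    # nothing yet, check again in 100 ms

        root = tk.Tk()
        label = tk.Label(root, text="Working...")
        label.pack()
        threading.Thread(target=worker, daemon=True).start()
        root.after(100, poll)
        root.mainloop()

    Swing's SwingUtilities.invokeLater and GLib's g_idle_add play the same role as the after()/queue pairing here: they are the documented way to cross back onto the toolkit's thread.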

    Read the article

  • Is it recommended to use more than one language at a startup?

    - by GoofyBall
    I work for a mobile startup where, for historical reasons, our chosen language was C#. I was recently assigned to a small project to build a tool that would be used by us internally. When I explained my intention to use Python to build this tool I was heavily criticized for this, because introducing new languages and technologies (Debian, Apache, Python and Django) into our ecosystem would make it harder for other developers to maintain (because only two other people know more than one language besides C#). I countered that this project would take far longer to develop in C# (which I think is an inherent problem with the language/.NET framework) and that the project was small and designed to solve a very particular problem. Of course it is necessary that the ecosystem be as homogeneous as possible, but if you are developing tooling, infrastructure, and internal systems, and there are better things to build them with than C#, then you should consider using them. By using one language you exclude a lot of other great libraries and frameworks out there, and in this case it was the difference between taking one week to build in Python as opposed to a month in C#. Do you think it is acceptable to understand and use only one language at a startup or even a larger company? Am I perhaps being naive?

    Read the article

  • ASP.NET Session Management

    - by geekrutherford
    Great article (a little old but still relevant) about the inner workings of session management in ASP.NET: Underpinnings of the Session State Management Implementation in ASP.NET. Using StateServer and the BinaryFormatter serialization that it performs caused me quite the headache over the last few days. Curiously, it appears the w3wp.exe process actually consumes more memory when utilizing StateServer and storing somewhat large and complex data types in session. Users began experiencing Out Of Memory exceptions in the production environment. Looking at the stack trace, it related to serialization using the BinaryFormatter. Using remote debugging against our QA server I noted that the code in the application functioned without issue. The exception occurred outside the context of the application itself, when the request had completed and the web server was trying to serialize session state into the StateServer. The short term solution is switching back to the InProc method. Thus far this has proven to consume considerably less memory and has caused no issues. Long term, the complex object stored in session will be off-loaded into a web service used to access the information directly from the database outside the context of the object used to encapsulate it.

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy and you need to start early and to stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads and shared objects, the task of testing is not as simple as testing the class, but testing them under endless possible situations within threading. To be more precise, let me tell you about the design of one of our applications: When a user makes a request, several threads are started that each analyse a part of the data to complete the analysis; these threads run a certain time depending on the size of the chunk of data (which are endless and of uncertain quality) to analyse, or they may fail if the data was insufficient/lacking quality. After each completes its analysis, it calls upon a handler which decides after each thread terminates if the collected analysis-data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big and only a certain number can be loaded into memory and those instances are reusable, some parts because they have a standing connection, where connecting takes time, e.g. SQL connections) so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not very easy to test it under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking and synchronization?
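    One common starting point, independent of language, is a brute-force stress test: hammer the shared, locked resource from many threads and assert that its invariants survive every interleaving. The sketch below illustrates the idea in Python's unittest rather than JUnit, with a made-up Counter standing in for the shared analyser state.

        import threading
        import unittest

        class Counter:
            """Stand-in for a shared, lock-guarded resource."""
            def __init__(self):
                self._lock = threading.Lock()
                self.value = 0

            def increment(self):
                with self._lock:
                    self.value += 1

        class StressTest(unittest.TestCase):
            def test_concurrent_increments(self):
                counter = Counter()
                n_threads, n_iterations = 8, 10_000

                def hammer():
                    for _ in range(n_iterations):
                        counter.increment()

                threads = [threading.Thread(target=hammer) for _ in range(n_threads)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                # The invariant must hold no matter how the threads interleaved.
                self.assertEqual(counter.value, n_threads * n_iterations)

        if __name__ == "__main__":
            unittest.main()

    Such tests don't prove the absence of races, but run repeatedly (and with deliberately injected delays or failures) they catch many interlocking bugs before production does.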

    Read the article

  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct and almost every major / mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet, anonymous functions are a very old and very well-known concept in Mathematics and Computer Science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, and only introduced them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers that they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT Operations remediating application service problems. Developers currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need: view application deployment locations, versions, and history; browse files on servers to ensure proper deployments; access configuration and log files on servers; remotely restart Windows services, scheduled tasks, and web applications; basic server monitoring and alerts; collect all application exceptions to a centralized point; and log and report on custom application events. Stackify is building an integrated DevOps solution delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before to solve complex application service issues more easily and faster. Stackify’s CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design in Stackify to deliver the integrated tools we always wished we had, the next generation of development operations tools.

    Read the article

  • Purchase Vouchers From A Reputable Source

    - by Harold Green
    We have seen a recent increase in counterfeit vouchers being marketed online and we want to make sure our Candidates are aware of the risks of purchasing vouchers from unauthorized sellers. Please be advised that only Oracle University and Oracle University authorized resellers may sell vouchers for Oracle Certification exams. If you purchase a voucher from any other source, your voucher may not be valid and you run the risk of program sanctions from Oracle which could include a lifetime ban on taking Oracle Certification exams. Be sure your voucher is from an authorized source: Oracle University or an Oracle Authorized Reseller. If you are unsure whether your voucher seller is an Authorized Reseller, call Oracle University to confirm and check for the official Oracle Reseller logo on the website. eBay, Craigslist, etc. are not authorized resale avenues. The only exceptions to the above sources are vouchers from programs that provide a discount on exams, or vouchers from your employer who has purchased them through their partner program or with learning credits. These vouchers may not be purchased by you, but may be provided to you by Oracle Academy, an Oracle Workforce Development Partner, or your employer who has purchased vouchers directly from Oracle. This investment is too important to trust to chance. Be sure that you are purchasing your voucher from a reputable source so that you can free your mind to prepare for your exam. View the full Oracle Certification Exam Voucher Use Policy.

    Read the article

  • Is a disk/ata timeout exception dangerous?

    - by j-g-faustus
    I have a few hard drives in mdadm RAID 5 configured to go to standby after a few minutes of inactivity. (Using hdparm.conf spindown_time.) At irregular intervals I get messages like these in dmesg: [ 1840.251661] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [ 1840.251722] ata4.00: failed command: SMART [ 1840.251758] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in [ 1840.251759] res 40/00:14:50:2e:04/00:00:02:00:00/40 Emask 0x4 (timeout) [ 1840.251858] ata4.00: status: { DRDY } [ 1840.251888] ata4: hard resetting link [ 1840.600742] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1840.601521] ata4.00: configured for UDMA/133 [ 1840.601547] ata4: EH complete [337877.713988] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [337877.714019] ata4.00: failed command: SMART [337877.714038] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in [337877.714039] res 40/00:04:90:10:81/00:00:00:00:00/40 Emask 0x4 (timeout) [337877.714089] ata4.00: status: { DRDY } [337877.714107] ata4: hard resetting link [337878.063085] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [337878.063743] ata4.00: configured for UDMA/133 [337878.063764] ata4: EH complete I think the exception is caused by smartd when a drive does not wake up quickly enough. There are no issues (that I can tell) in accessing the drives normally through the file system - it takes a few seconds longer than normal when they are asleep, but there are no exceptions. Is this something I should worry about, as a potential symptom of something that could corrupt a drive over time? Or can I safely ignore it as part of normal operation? Edit: By request: smartctl -a for sda and sde, both disks are members of the array. If ata4 is the same as scsi-4 then sde is the one that gave the error above, according to /dev/disk/by-path.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2, 2012

    - by Bob Rhubart
    ADF Mobile - Login Functionality | Andrejus Baranovskis "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI, " reports Oracle ACE Director Andrejus Baranovskis. Big Data: Running out of Metric System | Andrew McAfee Do very large numbers make your brain hurt? Better stock up on aspirin. According to Andrew McAfee: "It seems safe to say that before the current decade is out we’ll need to convene a 20th conference to come up with some more prefixes for extraordinarily large quantities not to describe intergalactic distances or the amount of energy released by nuclear reactions, but to capture the amount of digital data in the world." Cloud computing will save us from the zombie apocalypse | Cloud Computing - InfoWorld "It's just a matter of time before we migrate our existing IT assets to public cloud systems," says InfoWorld cloud blogger David Linthicum. "Additionally, it's a short window until the dead rise from the grave and attempt to eat our brains." Is it Halloween or something? Thought for the Day "A computer lets you make more mistakes faster than any invention in human history—with the possible exceptions of hand guns and tequila." — Mitch Ratcliffe

    Read the article

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle Business Intelligence Enterprise Edition and the Oracle Hyperion Enterprise Performance Management System (download at this link). This document lists the integration points among all current releases of Oracle BI EE with EPM System releases, with live links to other relevant documentation also provided. You may also be interested in the overall Hyperion EPM System Documentation Resources which can be found from the Doc Portal. And, there are two new tools for EPM @ MyOracleSupport {this needs your Oracle logon}: Cumulative Feature Overview Tool: This new tool offers a simple way to determine the features developed between releases to assist you in your upgrade implementations. The tool helps you to plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality that are added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product. Defects Fixed Finder Tool: This new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

    Read the article

  • How to break the "php is a bad language" paradigm? [closed]

    - by dukeofgaming
    PHP is not a bad language (or at least not as bad as some may suggest). I had teachers who didn't even know PHP was object oriented until I told them. I've had clients that immediately distrust us when we say we are PHP developers and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET. Facebook is built on PHP. There are plenty of solid projects that power the web, like Joomla and Drupal, that are used in the enterprise and governments. There are frameworks and libraries that have some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection and interfaces, not to mention that PHP now supports horizontal reuse natively and cleanly through traits. There are bad programmers and script kiddies that give PHP a bad reputation, but power the PHP community at the same time, and because it is so easy to get stuff done in PHP you can often do things the wrong way, granted, but why blame the language? Now, to boil this down to an actual answerable question: what would be a good and solid and short and sweet argument to avoid being frowned upon, stop prejudice in one fell swoop and defend your honor when you say you are a PHP developer? (free cookie with the whipped cream to those with empirical evidence of convincing someone —client or other— on the spot) P.S.: We use Symfony, and the code ends up being beautiful and maintainable

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1 Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effect" but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment which started this question! Part 2 Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better over-all performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones? Summary I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
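    A minimal Python sketch of the point about aggregation: even if every worker only builds new, immutable values and mutates nothing shared, the results still have to meet somewhere, and that meeting point (here a Queue, which does its locking internally) is where the synchronization lives. The chunk sizes and the sums are made up for illustration.

        import threading
        from queue import Queue

        results = Queue()                      # the one synchronized hand-off point

        def worker(chunk):
            # Pure transformation: reads its input, builds a new immutable tuple.
            results.put((len(chunk), sum(chunk)))

        chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
        threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # Aggregation happens after the join, at a single point of coordination.
        count = total = 0
        while not results.empty():
            n, s = results.get()
            count += n
            total += s
        print(count, "items, total", total)

    So immutability pushes the locking out of the application code, but the collection step still relies on some synchronized structure (a queue, an actor mailbox, a join) rather than disappearing entirely.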

    Read the article

  • Strategies for managing the use of types in Python

    - by dave
    I'm a long time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type but this goes against the whole duck typing thing. On some methods, I'll document the expected type of parameters (eg: list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: Example of the parameter naming issues: In our code base we have a task object (ORM object) and a task_obj object (higher level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties - it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.
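    Two of the strategies mentioned above, side by side, as a hedged sketch (the Task/task_obj names are taken from the question; the helper itself is made up): a documented, fail-fast runtime check, and a type-annotated signature that a checker such as mypy can verify while the runtime stays duck-typed.

        from typing import Iterable

        def close_tasks(tasks):
            """Close every task in `tasks`.

            :param tasks: iterable of Task ORM objects (not task_obj wrappers).
            """
            tasks = list(tasks)  # materialize so we can check and then iterate
            # Strategy 1: state the expectation and fail fast with a cheap check.
            assert all(hasattr(t, "save") for t in tasks), "expected Task ORM objects"
            for task in tasks:
                task.status = "closed"
                task.save()

        # Strategy 2: annotate the signature and let a static checker catch callers
        # that pass a task_obj or a dict of task properties by mistake.
        def close_tasks_annotated(tasks: Iterable["Task"]) -> None:
            for task in tasks:
                task.status = "closed"
                task.save()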

    Read the article

  • How do I get Graphics drivers / bluetooth / card reader working on an Acer Aspire V3-571G?

    - by Adam
    A couple of days ago I bought an Acer Aspire V3-571G laptop without a system installed on it. The only thing that was there was Linux Linpus. I created a bootable CD with Ubuntu 12.04 64-bit - I read that my processor was 64 bit and that it might be a good configuration for my gear (I'm not especially fluent with all the computer stuff, still trying to learn) - and replaced Linpus with Ubuntu. Everything seemed to work fine, but there are a few exceptions that came my way. My bluetooth doesn't work. It seems to be switched on, but when I check my system settings the button is actually off, and I can't drag it 'permanently' to the 'on' position. I tried a couple of commands I found on the net, none of them helped and there was no word whatsoever in my BIOS settings about enabling bluetooth. My card reader has some serious problems with copying more than one file at a time. I tried to put some music on my phone through a MicroSD card adapter (because my bluetooth doesn't work) and it got stuck every single time I copied an album on it. I'm not sure if all my drivers were properly installed, so I checked in the terminal if it could tell me something about my graphics. I typed: sudo lshw -c display and what I got was: *-display UNCLAIMED description: VGA compatible controller product: NVIDIA Corporation vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller cap_list configuration: latency=0 resources: memory:b2000000-b2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:2000(size=128) *-display description: VGA compatible controller product: Ivy Bridge Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:44 memory:b3000000-b33fffff memory:c0000000-cfffffff ioport:3000(size=64) As I said, I'm no expert and not a native English speaker, but it doesn't seem to be right. I've got a NVIDIA GeForce GT 640M.

    Read the article

  • Writing Web "server less" applications

    - by crodjer
    TL;DR What are the prospects of writing applications which are completely based on a REST database server (CouchDB) and web applications which directly access the DB instead of having a web server in between? I recently started looking up some NoSQL databases. MongoDB seems to be a popular choice. I also liked the project. But I personally liked the REST interface of CouchDB. So what I wanted to know is if there is the possibility of applications (maybe cached apps in the web browser, a Chrome extension etc.) which could just query the database directly with no requirement of a web server in between. All the computational logic would reside in the client application and the database will do what it does, CRUD. Since most client frameworks (I don't know which don't) support REST queries, it could be a good way of writing applications well optimized for the respective framework. These applications, though, won't be doing complicated computation, but still provide enough functionality to replace lots of conventional applications. Are there existing resources and projects which would help me move towards writing such applications, and what is the scope of developing in this way? Are there any technical/security issues with this? This post will help me decide whether to look into projects like CouchDB (and maybe dive into Erlang later) or stay with the conventional frameworks (like Django) and SQL databases. Update: A specific point of such apps I had in mind is the creation of offline applications just by replicating CouchDB data on the client.
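    For the flavour of it, here is a hedged Python sketch of talking to CouchDB's HTTP API directly - the same calls a browser app or extension would make - with no application server in the middle. The database name, document id and field are made up; it assumes a CouchDB instance listening on the default port 5984.

        import json
        from urllib import request

        BASE = "http://localhost:5984/mydb"        # assumed database name

        # Read a document straight from the database.
        with request.urlopen(f"{BASE}/some_doc_id") as resp:
            doc = json.load(resp)

        # Update it: CouchDB accepts the write only if the current _rev is included,
        # which the fetched document already carries.
        doc["status"] = "done"
        req = request.Request(
            f"{BASE}/some_doc_id",
            data=json.dumps(doc).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with request.urlopen(req) as resp:
            print(json.load(resp))                 # {'ok': True, 'id': ..., 'rev': ...}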

    Read the article

  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?

    Read the article

  • Suggestions for a Live chat software on websites for customer support?

    - by Munish Goyal
    Recommendations needed. We want to get in touch with customers via live chat. Requirements: chat window customisable to blend with the website theme (colors etc.); preferably the window should be within the webpage and not only a pop-out/popup; ease of use by the customer; minimally intrusive; should have triggers/alerts to the backend side, for example: if a user is unable to fill up the signup form or something, we should be able to offer help to the user and the chat window automatically shows to the user. What is the cost? UPDATE: After R&D we narrowed it down to comm100 and LivePerson, and we will go with comm100. LP is the best commercial solution, but comm100 is a good free solution. We go with comm100 as a starting point. But it sometimes gives exception alerts in the dashboard. Can it be configured for chat invitations triggered on certain conditions? Currently only a time-based trigger is available. Any other major differences between these two?

    Read the article

  • Is it bad to be the only person supporting software you have developed?

    - by trpt4him
    My employer has a need for a web-based application to manage and share data within the department, with approximately 50-75 possible users. I feel I have the ability to write it for them. I would likely use Python/Django with a MySQL database, so it would be open source. However, I'm the only IT person in my department (our larger organization has a separate IT support staff with which I often work, but not for web development). I want to develop this application, but if I leave in 1-2 years, and someone else has to come in after me and support it, will this be seen as a bad decision? This is assuming all the obvious points -- I will write documentation, I will comment my code, and I will strive to follow good application design principles. But will that be enough? In principle, is it acceptable for one person to develop and support an entire web application? Is this a "do first, then show and ask" kind of situation, or should I be certain it will be adopted by everyone involved first?

    Read the article

  • Remote Access to Owncloud Server

    - by John
    I'm currently trying to set up my own ownCloud server, and I've got it fully installed, configured, and accessible from within my own local network. I cannot figure out how to access it from the outside. So far I've: Successfully set up port-forwarding on my local router. I've done so via 'single port forwarding' and 'port range forwarding' Ports 80, 443, 3306 (Apache-Full and MySQL) Successfully obtained my external IP address. I've also tested this magic number from within the network at #insertIPhere/owncloud and it did work. Successfully set up the server using SQLite Successfully set up the server using MySQL Created the following exceptions in my firewall: Allow In Port 80 (Apache Full) Allow In Port 443 (Apache Full) Allow In Port 3306 (MySQL) Tried connecting from several different remote networks, so as to troubleshoot something on their end As far as trying to access it, I'm doing so through Google Chrome and Mozilla Firefox, trying to reach the server through #insertIPhere/owncloud using the above public IP address. So what have I missed, and how do I access my server from outside? Thanks in advance for your help and time, and I apologize in advance for what will probably turn out to be a noobish networking mistake on my part. I've looked at the official documentation, and also this question here.

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. But now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50 % and no network traffic occurs. This stays like that for a couple of minutes, CPU usage goes back to normal and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError - and I don't know what it means. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation, that has complex enough rules it can't easily be done in the database, so I have some variables declared outside the main loop that then get altered during the loop:
        variable_1 = 0
        variable_2 = 0
        for obj in queryset:
            if obj.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables ...
        return queryset, context
    So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find it makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that having a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing smaller methods when they will only be used in one place?
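    For what it's worth, the usual counter-argument looks something like the sketch below (names and conditions invented to mirror the pseudocode above): the single pass over the queryset is extracted into one helper with a clear return value, so the view reads as a summary while the aggregation rules live in one named place.

        from django.shortcuts import render

        def aggregate_stats(queryset):
            """Walk the queryset once and return the aggregated counters."""
            variable_1 = 0
            variable_2 = 0
            for obj in queryset:
                if obj.condition_a and variable_2 > 0:   # hypothetical conditions
                    variable_1 += 1
                # ... more conditions that alter the variables ...
            return variable_1, variable_2

        def report_view(request):
            queryset = build_queryset(request)           # hypothetical sort/filter helper
            variable_1, variable_2 = aggregate_stats(queryset)
            context = {"stat_1": variable_1, "stat_2": variable_2}
            return render(request, "report.html", context)

    Whether that split actually helps is exactly the judgment call the question is about; it pays off most when the helper has a name and a return value that make sense on their own, and least when it needs half a dozen accumulators passed back and forth.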

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault. Hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here. I have a custom made forum (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum or phpBB type of software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternative to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances. Thanks in advance.
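    As a rough illustration of why the nginx + gunicorn pairing fits a small box, gunicorn takes a plain-Python config file; the sketch below shows the kind of conservative settings one might start from on a 256MB instance (the numbers are assumptions to illustrate, not measured recommendations).

        # gunicorn.conf.py - started with something like:
        #   gunicorn -c gunicorn.conf.py myproject.wsgi:application
        # behind an nginx server block that proxies to 127.0.0.1:8000.

        bind = "127.0.0.1:8000"   # only nginx talks to gunicorn directly
        workers = 2               # keep the worker count low to fit in little RAM
        max_requests = 500        # recycle workers periodically to cap memory growth
        timeout = 30              # seconds before a hung worker is killed and restarted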

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include: new enterprise-focused administration features available in Oracle Java SE Advanced; products offering greater control of Java version compatibility; security updates; and a very useful new feature, the MSI compatible installer. Download | Release Notes | Java SE 8 Documentation. New tools, features and enhancements highlighted in JDK 8 Update 20 are: Advanced Management Console: The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise and for ISVs with Java-based applications and solutions. MSI Enterprise JRE Installer: Available for Windows 64 and 32 bit systems in the Oracle Java SE Advanced products, the MSI compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements. Performance: string de-duplication resulting in a reduced footprint, and improved support in G1 Garbage Collection for long running apps. A new 'force' feature in DRS (Deployment Rule Set) which allows system administrators to specify the JRE with which an applet or Java Web Start application will run; this is useful for legacy applications so end users don't need to approve security exceptions to run. Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4. JavaFX on ARM. Nashorn performance improvement by persisting bytecode after initial compilation. There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • Designing business objects, and gui actions

    - by fozz
    Developing a product ordering system using Java SE 6. The previous implementations used combo boxes, text fields, and check boxes, performing validation on action events from the GUI. The validation includes limiting existing combo box items, or even availability. The issue in the old system was that the action was received and all rules were applied to the entire business object. This resulted in a huge event chain as options were changed multiple times. To be honest I have no idea how an infinite loop wasn't produced. Through the next iteration I stepped in and attempted to limit the chaos by controlling the order in which the selections could be made, making configuration of BOs a top-down approach. I implemented custom box models, action events, beans/binding, and an MVC pattern. However I still am unable to fully isolate action event chains. I'm thinking that I've approached the whole concept backwards in an attempt to stay closest to what was already in place. So the question becomes: what do I design instead? I'm currently considering an implementation of interfaces, beans, and property change listeners to manage the back and forth. Other thoughts were validation exceptions, dynamic proxies... I'm sure there are a ton of different ways. To say that one way is right is crazy, and I'm sure it will take a blending of multiple patterns. My knowledge of Swing/AWT validation is limited; previously I did backend logic only. Other considerations were some sort of binding (JGoodies or otherwise) to directly bind GUI state to BOs.
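    One language-agnostic way to picture the "isolate the event chain" part is a guard flag in the observable model, so that changes made by listeners don't re-trigger another round of notifications. A toy Python sketch (all names invented; the same idea maps onto PropertyChangeListener in Swing):

        class ProductModel:
            """Toy observable model with a re-entrancy guard on notifications."""

            def __init__(self):
                self._listeners = []
                self._notifying = False
                self.options = {}

            def add_listener(self, fn):
                self._listeners.append(fn)

            def set_option(self, name, value):
                if self.options.get(name) == value:
                    return                    # no real change, so no event
                self.options[name] = value
                if self._notifying:
                    return                    # change made by a listener: don't cascade
                self._notifying = True
                try:
                    for fn in self._listeners:
                        fn(name, value)       # listeners may adjust other options here
                finally:
                    self._notifying = False

        model = ProductModel()
        model.add_listener(lambda name, value:
                           model.set_option("size_enabled", value == "shirt")
                           if name == "product_type" else None)
        model.set_option("product_type", "shirt")
        print(model.options)                  # {'product_type': 'shirt', 'size_enabled': True}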

    Read the article
