Search Results

Search found 1768 results on 71 pages for 'meet'.

  • Legal concerns with orchestrating a music submission contest

    - by Amplify91
    My team and I are getting pretty far along in the development of our latest game and have been thinking about audio. We decided to host an audio submission contest where we will offer a little cash and some equity stake in the game as prizes. We are also giving away copies of the game to participants. We hope not only to find audio for our game, but to meet some cool sound artists and promote the game a bit through the process.
    First of all, is this even a good idea? What are some potential dangers in doing this? Will it even be well received among artists?
    Secondly, I wrote up some Terms and Conditions in my best legal-speak to try to protect us and clarify how the contest will be run. Are these sufficient to make sure everyone involved is treated fairly and is legally protected? They are as follows:
    - All submissions (The Submission) must be licensed under a Creative Commons Attribution 3.0 Unported License (CC-BY-3.0).
    - By applying a CC-BY-3.0 license, you (The Submitter) expressly give Detour Games (and all members wherein) permission to copy, distribute, transmit, modify, adapt, and make commercial use of The Submission.
    - The Submitter must own all rights to The Submission and be within their rights to license it as specified and submit it.
    - The Submitter claims responsibility for the legality of The Submission. If The Submission is found to infringe on the rights of a person or entity other than those of The Submitter, Detour Games will not be held liable, as all responsibility and liability for the legality of The Submission is The Submitter's.
    - No more than two free copies of The Game per submitter.
    - All flat cash prizes will only be disbursed pending the success of our first $5,000 Kickstarter campaign. These prizes will be disbursed 30 days after Detour Games receives the Kickstarter funds.
    - All equity prizes (percentage of profits) are defined as the given percent of total profits after costs for a period of one year (12 months) after the release of RAW. These prizes will be disbursed semi-annually.
    - All prize money will be disbursed through either an electronic fund transfer through a service such as PayPal or by a mailed money order. It is The Submitter's responsibility to cooperate with Detour Games in the disbursement of the funds.
    - Detour Games reserves the right to change these Terms and Conditions at any time without notice.
    - By participating in the contest, The Submitter agrees to and accepts all terms and conditions listed.
    What else could I do (legally) to protect everyone involved?

  • Cloud Computing Business Benefits

    - by workflowman
    Unless you have been living under a rock for the past year, you will have heard about cloud computing. Cloud computing is a loose term that describes anything hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources hosted in the cloud.
    Benefits of Cloud Computing
    There are clearly benefits in building applications using cloud computing, some of which are listed here:
    - Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth according to the requirements of the software team. Often the hardware team has a separate budget that requires approval. Although hardware and software management are two separate disciplines, sometimes developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers.
    - Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.
    - Potential for shrinking the processing time: If processes are split over multiple machines, parallel processing is performed, which decreases processing time.
    - More office space: Walk into most offices, and guaranteed you will find a medium-sized room dedicated to servers.
    - Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to monitor these servers regularly.
    - Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning new hardware can cause further time delays or a slow-performing service. Cloud computing solves this because you can add more resources at any time.
    - Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, why not use those resources to power them?
    - Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network along with support staff doing other soon-to-be-cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path.
    What I do all day:
    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application.
    - Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit in the timeframe.
    - Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept.
    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT, because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes this is a general question, and I'd be perfectly happy to move this to another part of SE, but I didn't see a more fitting site. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So, I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, in case it's just absolutely unacceptable to speak in general terms.
    I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects.
    We run hardware that's touchscreen based - the user can use and interact with the devices. So, in terms of my QA role, in the short term, I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much. Most of our hardware runs .NET/C# code, with some of the servers running Java - but I don't expect to need to run tests on the Java side at this point.
    I hope to meet with one developer today and try to get a good idea of the output from the hardware, so that I can 'mock' the data that gets sent to the servers and use it to hunt for bugs. Eventually, we will be moving the hardware closer to where I live and work, so that I can test both virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which kiosks report their data to, gets updated from the kiosks, but the remote server does not. Or, vice versa: when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful.
    I've bought the book "How Google Tests Software", but it's really more a book about how their software testing differs from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble kind of had a terrible selection. I also figure a book from 2005 is not necessarily that good either.

  • Managing scalability and availability with two servers running Apache Httpd, Apache Mina and MySQL

    - by celalo
    Hello, I am not a developer and I don't have much experience with scalable server architectures, but I am in need of a highly available and scalable system for one of my projects. There are going to be two servers for the time being, both with 4-core CPUs and 8 GB RAM with RAID structures, running CentOS 5.4. I will also have a feature called "Failover IP" which makes it possible to redirect an IP address to another server within a short time.
    The applications which will run on the servers:
    - A Java application based on the Apache MINA server, handling TCP requests from some hundreds of network devices, where each device sends as much as one request per minute. Handling those requests includes parsing them and inserting a few rows into the database; parsing a request before inserting data into the DB takes negligible time.
    - A MySQL server, as I stated above.
    - A PHP web application running on Apache httpd which uses the same DB as the Java application.
    What I wish is to make the most of those two servers. I was imagining having the servers identical, sharing the workload. MySQL could be a cluster, maybe? And if some application fails, or the whole machine goes down, the other will continue serving the requests seamlessly - remembering that a "Failover IP" feature will be available for me to take advantage of. It should also be kept in mind that the number of servers could increase in time to meet demand.
    What can you suggest? Which kinds of tools can I make use of? Which kinds of monitoring software (paid/unpaid) are there? Thanks in advance.
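    For reference, a minimal sketch of what two-way (master-master) MySQL replication between the two boxes could look like; the server IDs and offsets below are illustrative assumptions, not a tested configuration:

        # /etc/my.cnf fragment on server A (sketch)
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2    # two writable masters, so step auto keys by 2
        auto_increment_offset    = 1

        # /etc/my.cnf fragment on server B (sketch)
        [mysqld]
        server-id                = 2
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 2

    Each server would then be pointed at the other with CHANGE MASTER TO and started with START SLAVE; the failover IP decides which box clients talk to at any given moment.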

  • Cisco ASA Act as a Hardware Security Module?

    - by Derek
    Hello, we have a partner that is requiring us to get an HSM for a web application that we host for them. This is something new for us; we've always installed our SSL certificates on our web servers and never needed a hardware device. We currently have 2 Cisco ASA 5510 firewalls in an active/standby configuration. Both ASAs have an ASA-SSM-10 security module installed in them. The web application is a standard HTTPS web page with no authentication required.
    I was wondering if we could use our Cisco ASAs to meet this requirement, or if we'll have to buy another device. I was doing some searching and read about Cisco's clientless WebVPN feature. It sounds like it might work, but I'm not sure. We basically want the ASA to handle the SSL and proxy the connection to our web servers. We do not want to prompt for a username or password to connect, or show any portals - just display the web page.
    If the ASA cannot do this, does anyone have any recommendations for network-attached hardware security modules? We are using VMware vCenter, so we'd rather have an external device attached to the network than buy HSM cards for every ESXi host. Thanks, Derek

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its:
    - Blazing insert performance
    - Document-oriented storage
    - Ability to let the engine drop inserts when needed for performance
    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB RAM). Sooooo, for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM.
    Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)
    Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!
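    For anyone wanting to measure how close their own indexes are to RAM, the mongo shell can report the sizes directly; "logs" below is a hypothetical collection name:

        // total index size across the current database, converted to GB
        db.stats().indexSize / (1024 * 1024 * 1024)
        // index size for a single collection, in bytes
        db.logs.totalIndexSize()

    Comparing those figures against physical RAM shows how much headroom remains before the working set spills to disk.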

  • Creating Windows 8.1 system image error

    - by Random
    I'm experiencing a "not enough space" error when trying to create a system image on a USB hard drive. Detailed error:

        ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.
        Blah, blah...
        There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup.
        Minimum requirement: for volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space.
        Recommended: at least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte.
        ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    I tried both PowerShell:

        wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and the File History button in Control Panel. Clearly both the EFI and Windows Recovery Environment partitions don't meet the requirements coming from the System Image tool (pic below). On top of that, all system partitions are now shown as 100% free in Disk Management; it's disturbing, but far from the actual state. My question is: how do I create a System Image in Windows 8.1?
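    One thing worth inspecting from an elevated prompt is the shadow-copy storage allocation on the volumes being backed up - a sketch using the standard vssadmin tool (the 2GB figure is an arbitrary example, and the EFI/recovery partitions have no drive letters, so they would need their \\?\Volume{...}\ GUID paths instead of C:):

        rem show current shadow storage associations and limits
        vssadmin list shadowstorage
        rem example: enlarge the shadow storage area for C:
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=2GB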

  • What is the collaborative screen shot/diagramming application recently featured on Hacker News and p

    - by wonsungi
    A few days ago, I saw a video for a screen capture application. I'm pretty sure I followed a link from Hacker News, possibly to a Lifehacker article. The video was very short, but demonstrated how the application could be used: the application was basically a movable/resizable viewport with a button. When the button is pressed, the contents of the viewport are saved to an image (basically a screen capture). The interesting thing is what you could do after that point. One of the specific examples from the video browsed to Google Maps Street View, grabbed a photo of an intersection, then scribbled notes about where to meet and where the restaurant was in colored "marker." Another example shown was grabbing a house layout from a CAD tool, then scribbling notes on it. The last part of the video showed several possible uses being scrolled through the application's viewport.
    It seemed very easy to share these images with other people, because there was some type of integration, either with the application's own site and/or common social websites/chat services. The application was shown running on both Windows and Mac.
    edit: I think there was an iPhone app, as well.
    Anyone know what this application is? I tried searching Google, Hacker News, and Lifehacker already. It is not Jing.

  • Password Policy seems to be ignored for new Domain on Windows Server 2008 R2

    - by Earl Sven
    I have set up a new Windows Server 2008 R2 domain controller and have attempted to configure the Default Domain Policy to permit all types of passwords. When I want to create a new user (just a normal user) in the Domain Users and Computers application, I am prevented from doing so for password complexity/length reasons.
    The password policy options configured in the Default Domain Policy are not defined in the Default Domain Controllers Policy, but having run the Group Policy Modelling Wizard, these settings do not appear to be set for the Domain Controllers OU - should they not be inherited from the Default Domain Policy? Additionally, if I link the Default Domain Policy to the Domain Controllers OU, the Group Policy Modelling Wizard indicates the expected values for complexity etc., but I still cannot create a new user with my desired password. The domain is running at the Windows Server 2008 R2 functional level. Any thoughts? Thanks!
    Update: here is the "Account policy/Password policy" section from the GPM Wizard:

        Policy                           Value                    Winning GPO
        Enforce password history         0 passwords remembered   Default Domain Policy
        Maximum password age             0 days                   Default Domain Policy
        Minimum password age             0 days                   Default Domain Policy
        Minimum password length          0 characters             Default Domain Policy
        Passwords must meet complexity   Disabled                 Default Domain Policy

    These results were taken from running the GPM Wizard at the Domain Controllers OU. I have typed them out by hand, as the system I am working on is standalone; this is why the table is not exactly the wording from the Wizard. Are there any other policies that could override the above? Thanks!
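    A quick sanity check from a command prompt on the domain controller is to force a policy refresh and then print the account policy that is actually in effect (both are standard Windows tools):

        gpupdate /force
        net accounts

    If "net accounts" still reports a minimum password length you did not set, the restriction is coming from somewhere other than the GPO being edited.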

  • Installing Silverstripe on 000webhost.com (free web host)

    - by benwad
    Hi, I'm trying to learn how to work with SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out in install.php, but I still get two warnings from the 'webserver configuration' section:
    - I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
    - I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.
    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install it even with these warnings, but when I press 'install' it just refreshes the install.php page. It doesn't even overwrite the .htaccess file. 000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how.
    EDIT: I managed to get to the next page, but now there is another warning which is stopping the install:
    Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you:
    - mod_rewrite is enabled
    - AllowOverride All is set for your directory
    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701
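    For reference, a minimal sketch of how the host's suggested directive would sit at the top of public_html/.htaccess (SilverStripe's installer normally writes its own rewrite rules below this; shown only to illustrate placement):

        # public_html/.htaccess (sketch)
        RewriteEngine On
        RewriteBase /
        # ... SilverStripe's generated rewrite rules follow here ...

    The unlink() warning at the end also suggests the installer lacks write permission on mysite/_config.php, which is worth checking independently of the rewrite problem.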

  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned out to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment, we have a unified ticketing system that's braindead but is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfills our needs, and which are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool.
    Management side: wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want a full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements.
    Non-management side:
    - flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.)
    - good full-text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name)
    - task lists or to-do lists, hopefully able to be grouped into a number of "projects"
    - file uploads
    - RSS or Atom feeds, email alerts of updates
    We're open to doing some customizations (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories:
    - project management/task tracking with poor wiki/documentation/knowledge base support
    - wiki with no task tracking support
    - ticketing system with everything else bolted on (we already have one that we're stuck with)
    - code-centric application (we do little "development", mostly SA work)
    Any suggestions? Or, lacking that, any comments on which software would be easiest to add the lacking features to (hopefully ending up with something that actually looks good and works well)?

  • How to manually start and re-start Apache with mod_wsgi powering a password protected Python WSGI app?

    - by Mahmoud Abdelkader
    I'm working on a project where I have to meet some regulatory requirements that require at least 3 out of 5 authorized users to start a backend web service that handles very sensitive information, using pre-assigned passwords. Right now, the prototype has been approved and is running using Python's wsgiref.simple_server(), which I have programmed to manually prompt for the passwords. Now that the prototype has been approved, I have to migrate the web application to a production environment, where I will need to run it behind Apache and mod_wsgi. I have two questions:
    1. Right now, I use a thin Python wrapper around expect to programmatically allow for remote password entry. How do I get Apache to prompt me for a password before starting? Will this have to be in the app.wsgi script that's executed by mod_wsgi? How would that work, since Apache daemonizes and thus has no stdin?
    2. Will I have to worry about some type of code reload? Apache probably has some maximum number of requests before it kills and restarts another worker process, but would this require a password prompt as well?
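    For context, a minimal sketch of the kind of wsgiref prototype described - the process collects a quorum of passwords on stdin before it ever binds the port (the user list, quorum size, and check_password stub are illustrative placeholders, not the poster's actual code):

        import getpass
        from wsgiref.simple_server import make_server

        AUTHORIZED = {"alice", "bob", "carol", "dave", "eve"}  # hypothetical users
        QUORUM = 3

        def check_password(user, pw):
            # stub verifier -- a real system would compare against stored hashes
            return bool(pw)

        def app(environ, start_response):
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"service is up\n"]

        if __name__ == "__main__":
            entered = set()
            while len(entered) < QUORUM:  # block until 3 of 5 users authorize
                user = input("user: ")
                if user in AUTHORIZED and check_password(user, getpass.getpass()):
                    entered.add(user)
            make_server("", 8000, app).serve_forever()

    The crux of the question is that this stdin-driven step has no direct equivalent once mod_wsgi owns the process lifecycle.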

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:
    1. Replicate from MySQL to SQL Azure/SQL Server, which seems like it would be lovely - is this possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set; it's just data in tables.
    2. Get MySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.
    Has anyone gotten MySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates, might meet our but-we-need-to-join-some-tables needs, and can easily be run on Amazon and Azure)?

  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for the expiration of the passwords, and people are getting locked out... There has been no warning that user passwords were about to expire. They just come in to work and they cannot log in; the phones no longer connect; nothing. Reset the password and all is good. Some of the users are locked out, though most are not - they just cannot log in.
    On setting the password expiration, I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before it would expire. Clients range from WinXP, WinVista, Win7, and Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration?
    Resultant Set of Policy for a user that was not prompted:

        Account Policies/Password Policy
        Policy                                        Setting                    Winning GPO
        Enforce password history                      10 passwords remembered    Default Domain Policy
        Maximum password age                          270 days                   Default Domain Policy
        Minimum password age                          0 days                     Default Domain Policy
        Minimum password length                       4 characters               Default Domain Policy
        Password must meet complexity requirements    Disabled                   Default Domain Policy
        Store passwords using reversible encryption   Disabled                   Default Domain Policy

        Account Policies/Account Lockout Policy
        Policy                                Setting                    Winning GPO
        Account lockout duration              20 minutes                 Default Domain Policy
        Account lockout threshold             5 invalid logon attempts   Default Domain Policy
        Reset account lockout counter after   15 minutes                 Default Domain Policy

        Local Policies/Audit Policy
        Policy                           Setting            Winning GPO
        Audit account logon events       Failure            Default Domain Policy
        Audit account management         Success, Failure   Default Domain Policy
        Audit directory service access   Success, Failure   Default Domain Policy
        Audit logon events               Failure            Default Domain Policy
        Audit policy change              Success, Failure   Default Domain Policy
        Audit privilege use              Failure            Default Domain Policy

        Local Policies/Security Options - Interactive Logon
        Policy                                                                Setting   Winning GPO
        Interactive logon: Prompt user to change password before expiration   7 days    Default Domain Policy
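    As a stopgap while the missing warnings are investigated, the AD PowerShell module that ships with 2008 R2 can list whose passwords expire soon - a sketch reading the constructed attribute msDS-UserPasswordExpiryTimeComputed:

        Import-Module ActiveDirectory
        Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
            -Properties msDS-UserPasswordExpiryTimeComputed |
          Select-Object Name, @{n='PasswordExpires'; e={[datetime]::FromFileTime($_.'msDS-UserPasswordExpiryTimeComputed')}} |
          Sort-Object PasswordExpires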

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux, where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all.
    You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all.
    It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
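    One open-source tool in exactly this space - not mentioned above, so purely a suggestion - is wkhtmltopdf, which drives the WebKit engine headlessly and therefore handles CSS the way a browser does:

        wkhtmltopdf input.html output.pdf

    The same browser-engine approach is what the Gecko suggestion amounts to, just packaged as a ready-made CLI.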

  • Adobe premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. I get an error that started showing up on June 16th, 2010. There are no Windows updates on the same date as the error, other than Windows Defender. It seems to happen when working with image overlays.
    ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close."
    We opened the side of the case on the possibility that there is an overheating problem. Nvidia driver ver 8.16.11.9175 (nVidia Quadro FX 1700). I am running:
    - Windows 7 x64
    - Adobe Premiere CS5 Production
    - nVidia Quadro FX 1700 (MRGA14L)
    - 4 GB RAM
    - RAID 10, 2x 750 GB drives
    - Duo core 3.0, 6 MB L2 cache
    At least three other people have come across this error: NVidia Forum, EVGA Forum, NVidia Forum.
    UPDATE: Having the case open did not help. I also installed new Nvidia drivers; now I get a different error:
    ERROR: "Your hardware configuration does not meet minimum specifications needed to run the application. The application must close."
    I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Anything beyond this and I am out of options.

  • Windows Task Scheduler fails at sending e-mail

    - by Marki
    The error is 2147746321. I can see in the mail server log that it tries, but the connection gets closed:

        Wed 2012-10-10 15:55:25: Session 990590; child 1
        Wed 2012-10-10 15:55:25: Accepting SMTP connection from [x:49161] to [y:25]
        Wed 2012-10-10 15:55:25: --> 220 Mdaemon; Wed, 10 Oct 2012 15:55:25 +0200
        Wed 2012-10-10 15:55:25: <-- EHLO x
        Wed 2012-10-10 15:55:25: --> 250-Hello x, pleased to meet you
        Wed 2012-10-10 15:55:25: --> 250-VRFY
        Wed 2012-10-10 15:55:25: --> 250-EXPN
        Wed 2012-10-10 15:55:25: --> 250-ETRN
        Wed 2012-10-10 15:55:25: --> 250-AUTH LOGIN
        Wed 2012-10-10 15:55:25: --> 250-8BITMIME
        Wed 2012-10-10 15:55:25: --> 250 SIZE 20971000
        Wed 2012-10-10 15:55:25: <-- AUTH LOGIN
        Wed 2012-10-10 15:55:25: --> 334 VX......
        Wed 2012-10-10 15:55:25: Connection closed
        Wed 2012-10-10 15:55:25: SMTP session terminated (Bytes in/out: 26/212)

    Googling does not reveal much, except that it indeed "doesn't work", and Exchange pops up all over the place. This is not an Exchange server. I just want a plain and straight SMTP connection to work. How? (I have tried running the task as a normal user and as the system account; no difference.)
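    The trace shows the session dying right after the server issues its 334 challenge, i.e. the scheduled task never answers the AUTH LOGIN prompt. A commonly suggested workaround - assuming a script action is acceptable in place of the built-in e-mail action - is to send through PowerShell instead (server, addresses, and credential handling below are placeholders):

        # sketch -- a real task would load a stored credential instead of prompting
        $cred = Get-Credential
        Send-MailMessage -SmtpServer 'y' -From 'task@x' -To 'admin@y' `
            -Subject 'Task finished' -Body 'Done.' -Credential $cred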

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory in the following format:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated such that it fits on a DVD. However, the achieved compression means the archive comes out smaller than the estimate, and there is abundant space left on the DVD. This is something like a knapsack algorithm: I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate file size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate compressed file size? I am also looking into options like perl scripts etc.
    Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a perl script (Link) that sort of emulates the knapsack algorithm. The current problem with the above-mentioned script is that it splits the files in their original state. When I compress the files after splitting them, there are opportunities for adding more files, which I consider inefficient. There are 2 ways I could resolve the inefficiency:
    a) Compress individual files and save them in a directory using a script; the compressed files could provide a best estimate. I could generate a list from the folder of compressed files and use it on the uncompressed ones.
    b) Check whether the compressed file's size is less than the required size. If so, keep adding files until the requirement is met. However, adding new files to the compressed file is an optimization problem by itself.
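    For what it's worth, tar accepts -T in streaming mode too, so the two commands above can be combined into a dry run that measures exactly what the real archive would occupy (same list, same compressor, nothing written to disk):

        tar -cjf - -T test_1.lst --no-recursion | wc -c

    Files can then be appended to test_1.lst and the dry run repeated until the byte count approaches DVD capacity.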

  • How do I deliver mail for wildcard addresses to a particular user/alias/program?

    - by David M
    I need to configure sendmail so that mail delivered to wildcard addresses is accepted for delivery and then delivered to a user, alias, or directly to a script. I can rewrite the envelope/headers any number of ways, but I don't know how to accept the wildcard address when it's provided in RCPT TO: - everything I've tried so far winds up with a 550 user unknown error.
    So here's a specific example: I want to be able to handle any address that consists of a series of digits, followed by a dot, followed by a word, then pipe that to a script. If the headers get rewritten, that's OK, but I need the envelope to contain the actual Delivered-To address. Here's the sort of SMTP session I need:

        220 blah.foo.com ESMTP server ready; Thu, 22 Apr 2010 20:41:08 -0700 (PDT)
        HELO blort.foo.com
        250 blah.foo.com Hello blort.foo.com [10.1.2.3], pleased to meet you
        MAIL FROM: <[email protected]>
        250 2.1.0 <[email protected]>... Sender ok
        RCPT TO: <[email protected]>
        250 2.1.5 <[email protected]>... Recipient ok

    I tried some stuff with regex maps, but I never got past 550 user unknown.
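    Not the regex-restricted form asked about, but for reference, the classic catch-all-to-program plumbing in sendmail looks roughly like this - a sketch assuming FEATURE(virtusertable) is enabled, with hypothetical names (a regex map would still be needed to accept only digits-dot-word and reject everything else):

        # /etc/mail/virtusertable -- route every otherwise-unknown address at the domain
        @blah.foo.com    wildcard-handler

        # /etc/aliases -- hand the message to a script (smrsh must permit the script)
        wildcard-handler: "|/usr/local/bin/handle-wildcard.sh"

    After editing, rebuild the map with makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable and run newaliases.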

  • Share Exchange Calendar Outside Organization

    - by CalCurious
    I'm trying to figure out the best way to meet a user's (Corp-A-User) request to share their calendar with someone at another company (Corp-B-User). We're running Microsoft SBS 2008 with Exchange 2007 and SharePoint. The remote user is running Exchange, version unknown.
    Corp-A-User wants to give Corp-B-User the ability to create appointments on Corp-A-User's calendar. This will naturally require sharing of free/busy information. Corp-A-User naturally lacks the vision to see ANY problem with giving Corp-B-User full access to their calendar. But I see the problems with that, and would prefer that Corp-B-User have only the ability to see free/busy and create appointments.
    Most of the external publishing options that I have thought of, such as WebDAV, allow displaying a user's calendar, but there are problems with security and the ability to create appointments. Right now, I'm thinking the cleanest solution would be to use a Google calendar along with Google Calendar Sync for the two users' Outlook clients. But I'm not sure there isn't a better way, and I hate the idea of pushing a corporate calendar up to Google - not to mention the issues likely to pop up from the multiple sync paths.
    Does anyone have a good solution for this scenario that they would be willing to share?

  • Wireless does not work on Ubuntu 9.04

    - by Yongwei Xing
    Hi all, I installed Ubuntu 9.04 on my old Lenovo Y520 laptop, and the wireless does not work. My wireless card is an Intel PRO/Wireless 2100, but I cannot enable it. My wired card is working well. Has anyone met this before? The ifconfig output is:

        eth0      Link encap:Ethernet  HWaddr 00:0a:e4:5f:6c:30
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:973 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1025 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:574701 (574.7 KB)  TX bytes:169249 (169.2 KB)
                  Interrupt:10

        eth1      Link encap:Ethernet  HWaddr 00:0c:f1:58:79:b5
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:10 Base address:0x8000 Memory:d0202000-d0202fff

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

    The output of iwconfig is:

        eth1      unassociated  ESSID:off/any  Nickname:"ipw2100"
                  Mode:Managed  Channel=0  Access Point: Not-Associated
                  Bit Rate:0 kb/s  Tx-Power:off
                  Retry short limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    I have another question: on 9.04 there was a network connection icon on the panel at the top, but after I upgraded another machine to 9.10, that icon disappeared. How can I get it back? Best Regards,
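    The iwconfig line "Tx-Power:off" usually means the radio itself is disabled (hardware kill switch, BIOS setting, or missing ipw2100 firmware). Two quick checks worth running - the rfkill utility may not be packaged on a release this old, in which case /sys/class/rfkill can be read directly:

        # did the driver load and find its firmware?
        dmesg | grep -i ipw2100
        # is a kill switch blocking the radio?
        rfkill list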

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent 2 days trying to fix this issue, with no success. The server is a MySQL database server.
    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146 GB SAS (software RAID 1).
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41.
    Issue: while MySQL is not used and runs with no database, everything seems all right. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption. So, as you can imagine, the load average goes high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the project installed.
    Additional info: the database used is not more than 24 MB, and it was moved from a server with fewer resources and a lot of much larger databases, so it's not the database/project. my.cnf is not the reason either, as I used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any connections and runs up to the max_connections limit. The logs are quiet; nothing there.
    I switched to this Ubuntu version after I suspected some problems in the newly installed Ubuntu 11.10 server. That one worked all right for an hour after I made a kernel upgrade to 3.0.1 (it was eating the memory also). I tested disk speed, and it seems all right.
    Some more output on the running server (screenshots in the original post): dstat -cndymlp -N total -D total 3, and htop.
    Any ideas? Did anyone meet the same problem? Any fix you can think of?
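    Since connections pile up to the max_connections limit, the most telling snapshot while the cores are pegged is what those threads are actually doing - standard MySQL commands, run from the shell:

        mysql -e "SHOW FULL PROCESSLIST\G"
        mysqladmin extended-status | grep -i thread

    A processlist full of identical stuck queries points at the application or a missing index; a mostly idle list with high CPU points elsewhere.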

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500 GB of data that's currently stored between my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage for, my data. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games.
    I have found two services which meet most of this:
    - Mozy
    - Carbonite
    However, both services impose low bandwidth caps, on the order of 50 kb/s, which prevent me from backing up a data set of this size effectively (somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it tries to ignore pretty much every file in my backup set by default, because the files are mostly iso files and vmdk files, which aren't backed up by default.
    There are other services, such as EC2, which don't have such bandwidth caps, but such services typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for this kind of data set - for 500 GB that works out to about $50/month, at which price I could build my own NAS to hold the data, and it would pay for itself after ~2-3 months. (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.)
    Does anything exist meeting those requirements, or am I basically hosed?
