Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

  • Multiple Windows Desktop areas for Full-Screen applications

    - by arootbeer
    Is it possible to run multiple instances of Windows Explorer within a single user session, or to configure multiple desktops that are each a portion of a screen? I don't know the best way to describe what I want to achieve, but here's a picture of what I've got: I've got a 4-monitor setup, 3 portrait and one landscape, and I am normally running a number of RDP sessions, Outlook, Chrome, a development environment or two, and so on. Most of these applications support full-screen views which mostly or completely hide the window borders, but on the Windows Desktop they take up a full monitor to do so. What I want is to have 7 "desktops", "regions", call them what you will, each of which is, for the purposes of the applications running in it, a "full screen" environment. I'm not tied to Windows Explorer for this, in case it helps - a different window manager that will support this functionality would be a perfectly acceptable answer.

  • How to persuade C fanatics to work on my C++ open source project?

    - by paperjam
    I am launching an open-source project into a space where a lot of the development is still done Linux-kernel-style, i.e. C-language with a low-level mindset. There are multiple benefits to C++ in our space, but I fear those used to working in C will be scared off. How can I make the case for the benefits of C++? Specifically, the following C++ attributes are very valuable: the concept of objects and reference-counting pointers (I really don't want to have to malloc(sizeof(X)) or memcpy() structs); templates, for specialising whole bodies of code with specific performance optimizations and for avoiding duplication of code; template metaprogramming related to the above; the syntactic sweetness available (e.g. operator overloading, to be used in very small doses); the STL; and the Boost libraries. Many of the knee-jerk negative reactions to C++ are ill-founded. Performance does not suffer: modern compilers can flatten dozens of call-stack levels and avoid bloat through wide use of template specializations. Granted, when using metaprogramming and building multiple specializations of a large call tree, compile time is slower, but there are ways to mitigate this. How can I sell C++?
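
    A minimal, hypothetical C++ sketch of the contrast the question draws - hand-rolled malloc()/memcpy() struct handling versus a reference-counting pointer and a single template. The Packet type and helper names are invented purely for illustration and are not from the project in question:

        #include <cstdlib>
        #include <cstring>
        #include <memory>
        #include <vector>

        // Illustrative only: Packet and these helpers are hypothetical examples.
        struct Packet { int id; char payload[64]; };

        // C style: manual allocation, manual copying, manual cleanup.
        Packet* clone_c(const Packet* src) {
            Packet* dst = static_cast<Packet*>(std::malloc(sizeof(Packet)));
            std::memcpy(dst, src, sizeof(Packet));
            return dst;                            // caller must remember to std::free() it
        }

        // C++ style: the reference-counting pointer owns the memory; no explicit free needed.
        std::shared_ptr<Packet> clone_cpp(const Packet& src) {
            return std::make_shared<Packet>(src); // the copy constructor does the copying
        }

        // One template replaces a family of near-identical C functions (sum_ints, sum_doubles, ...);
        // the compiler specialises it per type, so nothing is duplicated by hand.
        template <typename T>
        T sum(const std::vector<T>& values) {
            T total{};
            for (const T& v : values) total += v;  // operator overloading, in a small dose
            return total;
        }

        int main() {
            Packet p{1, "hello"};
            auto copy = clone_cpp(p);              // released automatically when the last owner goes away
            std::free(clone_c(&p));                // the C-style clone needs manual cleanup
            std::vector<int> xs{1, 2, 3};
            return (copy->id == p.id && sum(xs) == 6) ? 0 : 1;
        }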

  • Is there an open source version check library and web app?

    - by user52485
    I'm a developer for a cross-platform (Win, MacOS, Linux) open source C++ application. I would like to have the program occasionally check for the latest version from our web site. Between the security, privacy, and cross-platform network issues, I'd rather not roll our own solution. It seems like this is a common enough thing that there 'ought' to be a library/app which will do this. Unfortunately, the searches I've tried come up empty. Ideally, the web app would track requests and process the logs into some nice reports (number of users, what version, what platform, frequency of use, maybe even geographical info from IP address, etc.), while appropriately respecting privacy. What pre-existing tools can help solve this problem? Edits: I am looking for a reporting tool, not a dependency checker. Our project has the challenge of keeping up with our users. Most do not join the mailing list. Our project has not been picked up by major distributions -- most of our users are Windows/MacOS anyway. When a new version comes out, we have no way of informing our users of its existence. Development is moving pretty fast, with major features added every few months. We would like to provide the user with a way to check for an updated version. While we're at it, we would like to use these requests for some simple & anonymous usage tracking (X users running version Y with Z frequency, etc.). We do not need/want something that auto-updates or tracks dependencies on the system. We are not currently worried about update size -- when the user chooses to update, we expect them to download the complete latest version. We would like to keep this as simple as possible.

  • Meet the Spec Leads & Active JSRs

    - by heathervc
    For your Monday reading pleasure, the JCP has published Spec Lead Profiles of In Progress/Active JSRs -- there are 35 of these Spec Leads! Find out more about these dedicated community leaders. In preparing these profiles, the PMO also asked Specification Leads to tell about their experiences as Spec Leads. Many themes emerged around transparency, openness, agility and participation. This led to a related article for those interested in learning about the experience of participating in the development of a Java Specification through the JCP program; see "Active Specification Leads Offer Best Practices and Tips for Success". In Progress/Active JSRs were also reported on in the PMO Presentation during the last JCP EC Face-to-Face meeting in September 2012. Now is a good time to start thinking about nominations for Star Spec Leads. Nominations for 2012 are now open. Anyone can submit a nomination for Star Spec Lead; however, we ask that you nominate an active JSR Spec Lead operating a JSR under JCP program version 2.8 (introduced October 2011) or above. Nominations close 31 December 2012.

  • Do you use your personal laptop for work? [closed]

    - by davekaro
    We're trying to get our company to let us use our own personal laptops for client work. We've agreed that any code/data will be encrypted using something like TrueCrypt, in case a laptop is stolen or lost. However, the company is still skeptical and not sure they want to allow us to use our personal machines for development. They would rather buy us laptops... but we want to use MacBook Pros and they don't want to pay for them. Even if they did buy us laptops, we would still have the issue of needing to encrypt the code/data in case of theft/loss. Do you use your own laptop for work? What are the arguments for/against this? UPDATE: Thanks for all the responses, it's given us a lot to think about. This was originally brought up because we were asking for a "personal loan" to buy new laptops for ourselves, and then we threw in there that we would use these laptops for work too - since right now we use our personal laptops occasionally, e.g. at a client site or for weekend support.

  • How come I cannot make this file executable (chmod permissions)?

    - by bappi48
    I downloaded the Android Development Tools (ADT) for Linux and placed it in my home directory. After unzipping the files, when I double-click the "eclipse" executable file, Eclipse works perfectly fine. But if I unzip the ADT in a different directory - in my case drive E: (which shows up when I boot into Windows 7) - double-clicking the same "eclipse" executable file does not run Eclipse. It shows this error message: Could not display /media/Software/00.AndroidLinux/ADT/eclipse/eclipse. There is no application installed for executable files. Do you want to search for an application to open this file? If I press yes in the dialog, it finds "Pypar2", which is not my solution. I found that the "eclipse" file's permissions are as follows: -rw------- 1 tanvir tanvir 63050 Feb 4 19:05 eclipse I tried to change the permissions with "chmod +x eclipse", but it's no use. This command does not change the file permissions at all in this case. What should I do? Relevant output of cat /proc/mounts: /dev/sda6 /media/Software fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0 Please note that I'm new to Ubuntu and still learning day by day.

  • Windows user trying to install Git on Solaris

    - by nahab
    Is there a simple way to install Git on Solaris, as there is on Windows, without installing any extra libraries or compiling source files? And if not, why? UPD. Yes, I'm looking for a single package that will be easy to install. We have ~8 Solaris zones used for development, so we need a simple way to install Git on them quickly. Installation should be easy because every member of the team may have to do it, and it should be fast because of the large number of zones.

  • How to rewrite index.php (and other valid default files) to the document root using mod_rewrite?

    - by TMG
    Hello, I would like to redirect index.php, as well as any other valid default file (e.g. index.html, index.asp, etc.) to the document root (which contains index.php) with something like this: RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [NC,L] However, this is of course giving me an infinite redirect loop. What's the right way to do this? If possible, I'd like to have this work in both the development and production environment, so I don't want to specify an explicit url like http://www.mysite.com/ as the target. Thanks!

  • How should a small team using multiple OSes deploy over GitHub?

    - by Toby
    We have a small development team that has recently moved to using GitHub to host our projects. The team consists of three developers, 2 on Windows and 1 on Mac. I am currently researching the best way to deploy applications to our Linux servers (dev and production). Capistrano running locally would be ideal, but from what I read this won't work for the Windows machines. It looks like the best way is to use a post-receive hook in GitHub; I can see how this would work for auto-deploying to dev, but I don't see how we could then deploy to live. I have found paid services like http://www.deployhq.com/ but it feels like something that a quick bit of code should be able to do for free - I just can't seem to get myself pointed in the right direction! I was wondering what would be considered best practice for small-team deployment involving multiple local OSes and GitHub.

  • How to encourage domain experts familiar only with C into a C++ open-source project [closed]

    - by paperjam
    Possible Duplicate: How to persuade C fanatics to work on my C++ open source project? I am launching an open-source project into a space where a lot of development is done Linux-kernel-style, i.e. C-language with a low-level mindset. My project is broad and complex and uses aspects of the C++ language and libraries, including Boost, to best effect for simple, slightly syntactically sweetened, elegant and well-structured high-level code. We are also using C++ templates to avoid duplication of code and for static polymorphism, specialising code for performance. Many of the experts in this field are well used to pure C-language projects. How can I persuade them to contribute to my idiomatic C++ based project? I have no objection to C-language subcomponents or the use of a C-like subset for parts of the project, so that might be part of the answer. This is a rewritten and retagged rehash of my previous question that was closed. Apologies to those who read and answered for it not being constructive. I hope this new question is viewed as constructive. Please note that this is not a language advocacy question and please keep answers in that spirit.
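
    As a rough, hypothetical sketch of the "static polymorphism via templates" point above (the codec types and the write_frame helper are invented for illustration, not taken from the project being described): the policy type is chosen at compile time, so one body of code is specialised per type with no virtual-dispatch cost.

        #include <cstddef>

        // Illustrative only: these names are hypothetical examples.
        // Each policy supplies a compile-time encode(); no virtual functions are involved.
        struct FastCodec  { static void encode(const char*, std::size_t) { /* speed-tuned path    */ } };
        struct SmallCodec { static void encode(const char*, std::size_t) { /* size-optimised path */ } };

        // One body of code; the compiler emits a specialised, inlinable version per Codec,
        // which is the "specialisation without hand-duplication" argument from the question.
        template <typename Codec>
        void write_frame(const char* data, std::size_t len) {
            Codec::encode(data, len);
        }

        int main() {
            const char buf[] = "payload";
            write_frame<FastCodec>(buf, sizeof buf);   // resolved at compile time
            write_frame<SmallCodec>(buf, sizeof buf);
            return 0;
        }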

  • Windows 7 equivalent of "Add-WindowsFeature"

    - by L.Moser
    I want to script the "Turn Windows Features On or Off" functionality for my development group so that we'll have a means of ensuring that everyone is running the same configuration. We are running Windows 7. Is this possible without DISM.exe? It doesn't necessarily have to be scripting. Windows Features is just one of several configurations that developers are responsible for modifying personally. It would also be nice to ensure (for example) that IIS and certain services are configured properly on a given developer's machine. If there's a larger-scale tool that could give us this functionality, I would be interested in that too.

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog "Clouds, Clouds Everywhere But not a Drop of Rain". In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application may therefore require a request to float endlessly through the worlds of system administrators, DBAs and middleware admins - resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn't received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as the infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission-Critical Applications Efficiently: It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make its products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog "Clouds, Clouds Everywhere But not a Drop of Rain" to Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

  • SharePoint Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet, and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load-balancing. There are 3 options you can choose from to set up SQL Server session storage: tempdb, the default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new DB to store certain information in; tempdb stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new DB. Since you've got to create the new DB either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like to have the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!

  • PostgreSQL, update existing rows with pg_restore

    - by woky
    Hello. I need to sync two PostgreSQL databases (some tables from the development db to the production db) sometimes. So I came up with this script: [...] pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \ pg_restore -a -U user2 -d dbname2 [...] The problem is that this works only for newly added rows. When I edit a non-PK column I get a constraint error and the row isn't updated. For each dumped row I need to check whether it exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY. Thanks for your advice. (Previously posted on stackoverflow.com, but IMHO this is a better place for this question.)

  • Virtualbox PUEL Interpretation

    - by modernzombie
    Sorry if this seems like a lame question, but I want to be sure before making a decision. The VirtualBox PUEL license says "Personal Use" requires that you use the Product on the same Host Computer where you installed it yourself and that no more than one client connect to that Host Computer at a time for the purpose of displaying Guest Computers remotely. I take this to mean that if I want to set up a development server (web server) that's only used by me to do my work, this falls under personal use. But if I make this server available for clients to connect to the websites to view my progress, this is no longer personal use - which would also mean that using VirtualBox to run a production web server is against the license. Again, sorry if this is a dumb question, but I find it hard to follow the wording used in licenses. I know I could go with the OSE but I have not looked into VNC versus RDP yet. Thanks.

  • Why aren't there 8GB RAM modules yet?

    - by user49951
    Why is RAM module development seemingly stuck at the same size for a while now (a couple of years)? I bought 2x2GB modules 2 years ago, and now they're all still the same size, with prices even higher. I want more memory, because I work a lot on my computer and I just need it. What is going on? Hardware/memory progress was being made constantly until these last couple of years, and I've been a big computer user for over 15 years. Why aren't there 4GB/8GB modules yet? I would gladly replace my DDR2 motherboard for a DDRX one if it had at least 4GB DDRX modules for a reasonable price. Now we have a situation where very cheap USB drives are reaching 64GB while RAM modules are stuck at a pathetic 2GB. Sounds like some sort of conspiracy.

  • Most efficient RAID configuration with 6 disks?

    - by Bob King
    I have a hand-me-down server that I'm setting up at home and it's got six 72GB hard disks (as well as two 18GB drives that I'm using for the OS). What is the best way to configure those 6 drives? Should I use RAID 5 or 6, or go with something simpler, like mirroring? I'm planning to use it to hold a source control repository, and possibly data for a development SQL server. The machine has a hardware RAID controller. It is an old IBM server.

  • Server Sizing Methodology

    - by adbrpc
    Our development environment consists of JBoss 5.0.1, a SQL Server 2008 DB server, and Oracle IDM. The hardware is Windows 2008 32-bit with 4GB RAM. We have reached a stage where our environment cannot handle the application, resulting in JBoss shutting down with out-of-memory errors and CPU usage reaching 90%. I am looking for a methodology to calculate correct server sizing, where I input TPS, max number of concurrent users, max CPU utilization, etc., and get back the number of servers, RAM size and number of cores. I am expecting the application to grow 10% annually. Load balancing and failover should also be taken into account while sizing.

  • Cannot SSH into Virtual Machine

    - by MasterGberry
    I am running a CentOS VM on my desktop that I use for development testing when coding in Python. At my school I have a dedicated IP set up for the VM and my desktop, so I never seem to have an issue SSHing from the desktop into the VM. I am now at home for winter break and cannot seem to SSH into the VM using the local IP address behind my router, the external IP with port 22 forwarded to my VM, or anything. Strangely enough, I can SSH into my production server and from there SSH into the VM, but not from my desktop to the VM directly. What should I do to get this to work? Thanks

  • Setting up a copy of a site with IIS 7?

    - by SJaguar13
    I have a site running on IIS with a dyndns.org domain that points to the IP of the Windows 2008 machine hosting it. I need a copy of that site for development purposes. I set up another folder with all the files and created a new site in IIS. I don't really have a domain for it, so I was just going to use the IP address. When I go to localhost, 127.0.0.1, or the internal IP, I get "bad hostname". If I use the IP address on port 80 (the same as the real version of the site), I get "404 not found". If I use a different port so I don't have them both on the same IP with the same port, I get "connection timed out". How do I go about setting this up?

  • Map localhost to IP address on Windows XP & Internet Explorer 7+?

    - by roblocop
    I'm trying to map 'localhost' to an IP address elsewhere on the network, say '10.0.1.1' for example. I've tried editing my hosts file, changing the entry from: 127.0.0.1 localhost to 10.0.1.1 localhost with no luck. The closest I've gotten is using DNS spoofing via Charles. Adding a DNS spoof entry mapping the host name 'localhost' to '10.0.1.1' works fine in Firefox, but fails in Internet Explorer, basically showing IE's 404 page. I'm wondering if there's some specific setting or way I can get DNS spoofing to work in IE? The main issue I'm trying to resolve is that our development environment points to 'localhost', and rather than setting the dev environment up on a legacy Windows laptop to try and debug, I'd like to point to a server that has it all set up so I can make the changes remotely.

  • How to make a secure MongoDB server?

    - by Earlz
    Hello, I want my website to use MongoDB as its datastore. I've used MongoDB in my development environment with no worries, but I'm worried about security on a public server. My server is a VPS running Arch Linux. The web application will also be running on it, so it only needs to accept connections from localhost. And no other users (by SSH or otherwise) will have direct access to my server. What should I do to secure my instance of MongoDB?

  • DNS issues on my iPhone

    - by mattalexx
    I'm trying to call up "https://m.google.com" on my iPhone on my home WiFi. Safari says it "cannot verify server identity" for m.google.com, and when I press Details, it refers to https://m.google.com as "mattserver". "mattserver" is the name of my development server, a Linux box on my home network. This stinks of DNS issues to me. Accessing the insecure version of that URL ("http://m.google.com") gives me a blank page. What could be going on here? Is there a way to look at the logs of my router somehow?

  • CI - How long is continuous?

    - by Andy
    We are currently using CCNet as our continuous integration server. Most projects check for changes every 30 seconds (the default) and if needed perform a build (unit tests, StyleCop, FxCop, etc.). We've got quite a few projects now, and the server spends most of its time near 100% CPU utilization. This has alarmed some of the development team, even though the server is responsive and builds still take about the same length of time they always have. It's been suggested that we lower the check interval to about five minutes. To me that seems too long, and we risk people committing code and then going home for the weekend, leaving a broken build that could hold up others. In response, the suggestion is that if someone needs to know the results they can force a build. But that seems to defeat the purpose of CI, as I thought it was supposed to be automated. My proposed solution is just to get another build server and split the builds amongst the servers. Am I thinking about this the wrong way, or is there a point where, if integration isn't often enough, you're not really doing CI anymore?

  • Sharing RAM resources between 2 or more computers

    - by davee44
    I know there was a somewhat similar question before: How to share CPU or RAM? But let me just specify it a little more... When Microsoft Windows requires more RAM capacity than is available, it uses a swap file to temporarily store the data there; this is effectively hard-drive-based RAM, and the technique has been used for many years. Theoretically, it shouldn't be too hard to implement a similar technology that uses the RAM of different computer(s) on the network for temporary data storage. This just requires software that runs on the computers in the network and accepts, holds in RAM, and returns data from/to the main computer; plus, the operating system of the main computer must have the ability to use computers on the network instead of (or in addition to) the swap file. I wonder, are there any implementations of this idea? This would allow users to build RAM clusters using all of their home or office computers, boosting the performance of a single computer for some development/gaming/video tasks, etc.
