Search Results

Search found 11268 results on 451 pages for 'shweta simply'.


  • Cannot boot Ubuntu 13.10 from my USB. Can I change the kernel on my laptop to run it?

    - by Carlos Dunick
    Currently I am running 12.04 and looking to upgrade to 13.10. I first tried a bootable 64-bit USB and failed, with the message "Kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." I then tried 32-bit and the same message came up. Is this due to my laptop simply being too slow, or can/should I change the kernel somehow? Acer Aspire 5710z: Intel Pentium dual-core processor, 1.73 GHz, 533 MHz FSB, 1 MB L2 cache, 2 GB DDR2. lspci output:

        00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02)
        00:1c.2 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 3 (rev 02)
        00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02)
        00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02)
        00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02)
        00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
        00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
        00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
        00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02)
        04:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
        05:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
        06:00.0 FLASH memory: ENE Technology Inc ENE PCI Memory Stick Card Reader Controller
        06:00.1 SD Host controller: ENE Technology Inc ENE PCI SmartMedia / xD Card Reader Controller
        06:00.2 FLASH memory: ENE Technology Inc Memory Stick Card Reader Controller
        06:00.3 FLASH memory: ENE Technology Inc ENE PCI Secure Digital / MMC Card Reader Controller
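
    A quick way to check whether this CPU can run a 64-bit kernel at all (a diagnostic sketch of mine, not from the original post) is to look for the "lm" (long mode) flag in /proc/cpuinfo from the existing 12.04 install:

        import platform

        def cpu_supports_64bit(cpuinfo_path="/proc/cpuinfo"):
            # The "lm" (long mode) flag means the CPU can execute x86-64 code;
            # without it only i386/i686 kernels will boot.
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        return "lm" in line.split(":", 1)[1].split()
            return False

        print("kernel arch :", platform.machine())   # e.g. 'i686' on a 32-bit kernel
        print("64-bit CPU  :", cpu_supports_64bit())

    Pentium dual-core chips of this era are typically 32-bit only, which would explain the 64-bit image failing; the 32-bit (i386) 13.10 image should not print that error, though, so seeing it there points at a bad download or a badly written USB stick rather than the CPU.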

    Read the article

  • At the end of my rope

    - by hvgotcodes
    I am a contractor to a big company. Currently, there are three developers on the project, myself included. The problem is the other two developers don't really get it. By "it" I mean the following:

    - They don't understand the best practices for the technology we are using. After six months of me and others giving them examples, there are terrible anti-patterns being used.
    - They are "copy and paste" programmers that produce primarily spaghetti code.
    - They constantly break things, implementing changes but not doing a basic smoke test to see if all is good.
    - They refuse to, or only rarely, ask for code reviews.
    - They refuse to, or only rarely, do even basic things like formatting code.
    - No documentation on any classes (jsdocs).
    - They are afraid to delete code that doesn't do anything.
    - They leave commented code blocks everywhere, even though we have version control.

    I find myself getting more and more frustrated as I format others' code, fix bugs, discover functionality that is broken, and create abstractions to remove the spaghetti. I really don't know what to do. I try not to get too frustrated, but it's just a mess. I like these people as people, but I feel like the coding situation is so bad that I could move faster if they simply browsed the web all day. Would it be out of line to ask our manager to review the others' svn commit access, so that commits can only be done after a review by someone who is knowledgeable in what we are doing? As a contractor, I'm not sure if that's the best move. Is there a subtle/not so subtle way of making it clear how many things I am fixing?

    Read the article

  • Single-developer Git workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I generally work on a live server. I FTP in, make my edits, save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or WordPress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear on how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log in over SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get its own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
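
    One pattern that fits this setup (a sketch of mine, not from the original question - all paths and names are hypothetical) is a bare repository per client site on the company server, with a post-receive hook that checks each push out into that client's web root. Git hooks are plain executables, so a minimal Python version could look like this:

        #!/usr/bin/env python
        # post-receive hook for a bare repo such as /srv/git/client-site.git;
        # deploys the pushed master branch into the client's document root.
        # Paths are illustrative - adjust per client, and make the file executable.
        import subprocess
        import sys

        GIT_DIR = "/srv/git/client-site.git"   # the bare repository itself
        WORK_TREE = "/var/www/client-site"     # where the live site lives

        # git feeds the hook one "<old-sha> <new-sha> <refname>" line per updated ref
        for line in sys.stdin:
            old, new, ref = line.split()
            if ref == "refs/heads/master":
                subprocess.check_call([
                    "git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
                    "checkout", "-f", "master",
                ])

    Giving each client's website its own bare repository keeps histories and deploy targets independent, which also answers the "what represents a repository" question in a simple way.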

    Read the article

  • Installing Django on Windows

    - by Pranav
    Ever needed to install Django in a Microsoft Windows environment? Here is a quick start guide to make that happen:

    1. Read through the official Django installation documentation; it might just save you a world of hurt down the road.
    2. Download Python for your version of Windows.
    3. Install Python; my preference here is to put it into the Program Files folder under a folder named Python<Version>.
    4. Add your chosen Python installation path into your Windows path environment variable. This is an optional step; however, it allows you to just type python in the command line and have it fire up the Python interpreter. An easy way of adding it is going into Control Panel, System and into the Environment Variables section.
    5. Download Django. You can either download a compressed file or, if you're comfortable with using version control, check it out from the Django Subversion repository.
    6. Create a folder named django under your <Python installation directory>\Lib\site-packages\ folder. Using my example above, that would have been C:\Program Files\Python25\Lib\site-packages\.
    7. If you chose to download the compressed file, open it and extract the contents of the django folder into your newly created folder. If you'd prefer to check it out from Subversion, the normal check-out points are http://code.djangoproject.com/svn/django/trunk/ for the latest development copy, or a named release, which you'll find under http://code.djangoproject.com/svn/django/tags/releases/.

    Done, you now have a working Django installation on Windows. At this point, it'd be pertinent to confirm that everything is working properly, which you can do by following the first Django tutorial. The tutorial will make mention of django-admin.py, a utility which offers some basic functionality to get you off the ground. The file is located in the bin folder under your Django installation directory. When you need to use it, you can either type in the full path to it or simply add that file path into your environment variables as well. Hope this helps!
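
    A quick sanity check before starting the tutorial (my addition, not part of the original guide) is to confirm the interpreter can actually import Django from its new home:

        # Save as check_django.py and run from a Command Prompt: python check_django.py
        import django

        # get_version() formats django.VERSION as a readable string.
        print("Django", django.get_version(), "imported successfully")

    If the import fails, the django folder is not where the interpreter expects it - recheck step 6.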

    Read the article

  • Engineered Systems and PCI

    - by Joel Weise
    Oracle has a number of different engineered systems. These are designed to be highly integrated, optimized and secure systems. The Exadata database engineered system and the Exalogic application engineered system are two good examples. Often I am asked how these comply with different standards and regulations. Exalogic is the Oracle engineered system that supports applications, and it is the focus of today's blog. First, we must recognize that, as a collection of hardware and software, we cannot simply state that Exalogic is "compliant" with PCI DSS. This is because Exalogic must be implemented within the context of one's existing IT infrastructure, the security features of that infrastructure, the governance framework that exists, security policies, operational procedures, and other factors. What we can say, though, is that Exalogic has been designed with various security capabilities that can be utilized to support compliance with PCI DSS as well as other standards and regulations (e.g., NIST and HIPAA). Given that, Exalogic can be an excellent platform for running PCI-related payment applications. Coalfire Systems, a leading QSA in the US, has evaluated Exalogic against PCI DSS and supports this position. Their evaluation can be found here: Exalogic and PCI Compliance. I hope you find it useful.

    Read the article

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday. It's about a project in C++ that depends on OpenCV, and he wanted to include a specific OpenCV version in the svn and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about that.

    His arguments:

    - Everything has to be delivered in one package and we can't ask the client to install external libraries.
    - We depend on a fixed version so that new updates of OpenCV won't screw up our code.
    - We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:

    - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
    - Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible that a signature changes, unless it is a really crappy framework, which isn't the case with OpenCV.
    - We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get that fix.
    - It's simply inefficient and anti-design to depend on a certain version/build of an external library, as it makes it difficult for our project to adapt to new changes in the future.

    So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?
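
    To make the build-script argument concrete, here is a minimal sketch (mine, not from the discussion; the URL and checksum are placeholders, not real values) of how a build step can pin an exact upstream release without committing the archive to svn:

        # fetch_deps.py - download a pinned OpenCV release at build time
        # instead of committing the binaries to version control.
        import hashlib
        import urllib.request

        OPENCV_URL = "https://example.com/opencv-3.2.buildx.tar.gz"   # pinned release
        OPENCV_SHA256 = "replace-with-the-real-digest"

        def fetch(url, expected_sha256, dest):
            data = urllib.request.urlopen(url).read()
            if hashlib.sha256(data).hexdigest() != expected_sha256:
                raise RuntimeError("checksum mismatch - archive changed upstream")
            with open(dest, "wb") as f:
                f.write(data)

        fetch(OPENCV_URL, OPENCV_SHA256, "opencv.tar.gz")

    The checksum gives the same reproducibility the boss wants from committing the library, while moving to a newer build remains a deliberate, one-line change to the pin.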

    Read the article

  • Unable to detect Windows hard drive while running Ubuntu 12.04 from USB

    - by eapen jacob
    I am completely new to Ubuntu. I experimented with Ubuntu 12.04 by running it from a USB drive, in order to recover files from my hard disc. History: my laptop is an IBM R60 running Windows 7. Suddenly it gave me an error stating "error 2100 - Hard drive initialization error". I have read all the forums and most of them suggested that I remove and replace my HDD, and that did not work. One site suggested trying Ubuntu to recover the files. I booted my system from USB, and once Ubuntu came up, I chose "Try Ubuntu". It came up fine and I was able to surf and do other things, etc. However, I was unable to access my files on the hard disc, and "Attached Devices" is greyed out.

    1. Is there any way to gain access to my hard disc to recover the files? How do I navigate to search for my files?
    2. Is it simply not possible if the hard disc itself is not working? Is that why I'm unable to find the drives?

    I know it's a very novice question, but I'm hoping someone would help me out. Thank you, Eapen
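
    A first diagnostic from the live session (my suggestion, not from the original post) is to check whether the kernel detects the internal disk at all; if only the USB stick shows up, the drive has most likely failed at the hardware level and no file manager will see it:

        # List the block devices the running kernel has detected, with sizes.
        # Run from a terminal inside the "Try Ubuntu" session.
        import os

        for dev in sorted(os.listdir("/sys/block")):
            if dev.startswith(("loop", "ram")):
                continue  # skip virtual devices
            with open("/sys/block/%s/size" % dev) as f:
                sectors = int(f.read())  # reported in 512-byte sectors
            print("%-8s %7.1f GB" % (dev, sectors * 512 / 1e9))

    An internal SATA disk would normally appear as sda alongside the USB stick; given the "error 2100" at power-on, it may never show up at all, in which case recovery needs hardware-level help.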

    Read the article

  • Solutions for cheaply replacing poorly-supported onboard ATI card with discrete graphics on desktop machine?

    - by echo-flow
    I have put Ubuntu on my mum's desktop computer. Unfortunately, the open source radeon driver does not work well with the onboard ATI graphics, and ATI's proprietary driver no longer supports the hardware at all. In order to use the ATI proprietary driver with this hardware, it is necessary to use an older version of Xorg, which is now only available in versions of Ubuntu older than 8.10. Unfortunately, the open source radeon driver seems to be causing X to lock up intermittently when my mum uses Audacity. I'm willing to accept that some hardware is not well-supported on Ubuntu, and so, because this is a desktop computer with a couple of free PCI slots, I think a better solution might simply be to plug in a new graphics card that might have better driver support, and to disable the onboard ATI card in the BIOS. The requirements for this card are that it be inexpensive and have robust (preferably open source) driver support in Ubuntu 10.04. Heavy-duty graphics processing power is not a requirement. A second-hand card on Ebay would also be fine. Can anyone make some recommendations?

    Read the article

  • How do I get HDMI output working on a Dell XPS 15 L502x?

    - by Jones
    Recently I bought the notebook of my dreams, a Dell XPS 15, but since then the dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be just impossible. Yes, I have a 2 GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove the file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, only HDMI. So, to be able to use a second monitor, I have to configure it through the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • How To Figure Out Your PC’s Host Name From the Command Prompt

    - by The Geek
    If you’re doing any work with networking, you probably need to know the name of your computer. Rather than diving into Control Panel, there’s a really simple way to do this from the command prompt. Note: If you haven’t already, be sure to read our complete guide to networking Windows 7 with XP and Vista. To see the hostname… all you have to do is type hostname at the command prompt. Go figure, eh? The same thing works in Linux or OS X, though you can see that most of the time the hostname is part of the prompt anyway. Note: you can also change the hostname by simply typing “hostname <newhostname>”. Of course, the easiest way to see your computer name in Windows is to just hit the Win+Break key combination, which will pop up the System pane from Control Panel. If you want to change it instead, you can always change your computer name easily through Control Panel.
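
    If you need the name inside a script rather than at a prompt, the same value is available programmatically; a tiny cross-platform example (my addition, not from the article):

        import socket

        # Prints the same name the hostname command shows,
        # on Windows, Linux and OS X alike.
        print(socket.gethostname())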

    Read the article

  • Rhythmbox in Ubuntu 12.04 and iPod touch

    - by leousa
    I don't know if anyone else is experiencing something similar to this. I have an iPod touch bought 3.5 years ago. It received its last software update a year ago. After that update I had no problems syncing it in Ubuntu 11.04 and 11.10 (two different laptops). Transfer of music albums was flawless with Banshee, and it would even convert files to the right format automatically. Now the same iPod touch, without any further software or firmware update, does not work in Ubuntu 12.04 Rhythmbox. The device is mounted and recognized there, but when you drag/drop an album onto the device, nothing happens. Before you guys tell me to do it: yes, I have tried Banshee and gtkpod. The first brings up an error message saying that the iPod does not support mp3 files, and gtkpod simply crashes all the time. The result is the same. What is going on here? Why does a device that worked before not work now in 12.04? I purchased some music in the Ubuntu One store and would love to have it transferred to my iPod. Please no links to outdated online manuals with older versions of Rhythmbox or Banshee (I've read them all). And again, nothing is wrong with the iPod, as it worked in 11.04 and 11.10 and it has not been updated since then. I would strongly appreciate any help provided. Thank you.

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce availability of Oracle Solaris 11.1.

    Download Solaris 11.1
    Order Solaris 11.1 media kit
    Existing customers can quickly and simply update using the network-based repository.

    Highlights include:

    - 8x faster database startup and shutdown, and online resizing of the database SGA, with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    - Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    - Expanded support for Software Defined Networks (SDN) with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    - 4x faster Solaris Zone updates, with parallel operations that shorten maintenance windows
    - A new built-in memory predictor that monitors application memory use and provides optimized memory page sizes and resource location to speed overall application performance

    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update - now is the time to encourage them to install Oracle Solaris 11.1.

    Oracle Solaris 11.1 Data Sheet
    What's New in Oracle Solaris 11.1
    Oracle Solaris 11.1 FAQs
    Oracle Solaris 11.1 Customer Presentation

    Oracle Solaris 11.1 is recommended for all SPARC T4 systems and will soon be available preinstalled.

    Read the article

  • The Increasing Focus on Architecture

    - by Bob Rhubart
    If you follow my updates on Twitter or on the OTN ArchBeat page on Facebook you have probably noticed that I'm a regular reader of Joe McKendrick's SOA blog on ZDNet. Usually I'm content to simply share a link on my social networks when I find one of McKendrick's posts interesting. But with a recent post, In the cloud era, let's start calling IT what it is: 'Innovation Team', McKendrick hit on a point that warrants more than a quick link: "IT is no longer just a department full of people who code, build and maintain systems. IT is the business partner that plans and strategizes what types of technology solutions the business needs to move forward." Of course, what McKendrick is describing is an increased focus on architecture. Assuming that McKendrick's assessment is correct — and I do — that expanding focus, from coding, building, and maintaining systems to planning and strategizing technology solutions that serve the business, isn't limited to the organizational level. The individual roles within the IT organization will also have to shift to a more broadly architectural mindset. McKendrick's post references Dr. Irving Wladawsky-Berger's assessment of cloud computing as a critical "third model" of computing to emerge in the 50-year history of Information Technology. As computing itself evolves, the underlying roles that make computing possible must evolve accordingly. That evolution will be defined by an increased focus on architecture.

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that sprung up today. I had to book-keep and emit diagnostics for the average thread performance in highly-threaded code over a period of the last X number of calls and no more. Need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects and I've been using them frequently - one in particular seemed like a good fit for storing each thread's performance data: ConcurrentQueue. But I wanted to store only the most recent X# of calls, and since ConcurrentQueue currently does not support a size constraint, I had to come up with my own generic version, which attempts to restrict usage to numeric types only. Unfortunately there is no IArithmetic-like interface which constrains to only numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others as well as make your own.)

    FIFO FixedSizedConcurrentQueue

        using System;
        using System.Collections.Concurrent;
        using System.Linq;

        namespace xxxxx.Data.Infrastructure
        {
            [Serializable]
            public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
            {
                private FixedSizedConcurrentQueue() { }

                public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
                {
                    _queue = queue;
                }

                ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

                public int Size { get { return _queue.Count; } }
                public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }

                public int Limit { get; set; }

                public void Enqueue(T obj)
                {
                    _queue.Enqueue(obj);
                    lock (this)
                    {
                        T @out;
                        while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                    }
                }
            }
        }

    The usage case is straightforward; in this case I'm using a FIFO queue with a maximum size of 200 to store doubles, to which I simply Enqueue() the calculated rates:

    Usage

        var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, the updates occur in the shared library and we simply deploy the updated library to both of these systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate these two systems into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are:

    - Easier maintenance
    - Decreased testing/QA time

    Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the amount of hours this will save in the future, and how this will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

    Read the article

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups, but I'm unable to decide if I should also associate a separate subdomain or simply a subfolder with each group:

    Subfolders (www.site.com/groups/my-group)
    - Pros: Easier to maintain from a technical p.o.v.
    - Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/).

    Subdomains (my-group.site.com)
    - Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site.
    - Cons: Group and user names belong to the same namespace, so we need to check for collisions when creating a new user/group (see the sketch below). One cannot determine the content of a page by only reading the URL: is x.site.com a user page or a group page?

    What's your opinion on the matter? I should note that DeviantArt.com uses the 2nd option (that's where I got the idea). Thank you in advance!
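
    For the collision point above, here is a tiny illustration (mine, not from the question; the data structures are hypothetical stand-ins for the real user/group tables) of why one shared namespace means every new user or group name must be checked against both sets:

        # With subdomains, users and groups draw from a single pool of names.
        existing_users = {"alice", "bob"}
        existing_groups = {"landscapes"}

        def subdomain_available(name):
            name = name.lower()
            return name not in existing_users and name not in existing_groups

        print(subdomain_available("alice"))      # False - already taken by a user
        print(subdomain_available("portraits"))  # True

    With subfolders, the two namespaces stay separate and this cross-check disappears.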

    Read the article

  • Getting into the details of game engine programming

    - by Darkslash
    I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "Use an engine", "Use Unity3D", "Why waste your time writing code that already exists", etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use a tool like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client which will connect an existing 'legacy' application to SharePoint as a document management system (DMS), besides other already existing solutions.

    Finding excuses

    Well, no. Not really. I simply didn't block any (or enough) time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes for this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there were two priority 0 incidents from clients - external root cause - which took precedence over this leisure project.

    More to come

    Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.

    Next challenge?

    Hmm, there were ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of a study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • ISO 12207 - testing being only validation activity? [closed]

    - by user970696
    Possible Duplicate: How come verification does not include actual testing?

    The ISO 12207 norm states that testing is only a validation activity, while all static inspections (checking that a requirement, code, etc. is complete and correct) are verification. I did find some articles saying that's not correct, but you know, they are not "official". I would like to understand, because there are two different concepts in books and articles:

    1. Verification is all testing except for UAT (because only the user can really validate the use). E.g. here. OR
    2. Verification is everything but testing. All testing is validation. E.g. here.

    Definitions are mostly the same, as Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer's expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, because I just follow the requirements. But ISO does not agree.

    Read the article

  • How can I check myself when I'm the only one working on a project?

    - by Ricardo Altamirano
    I'm in between jobs in my field (unrelated to software development), and I recently picked up a temporary side contract writing a few applications for a firm. I'm the only person working on these specific applications. Are there ways I should be checking myself to make sure my applications are sound? I test my code, try to think of edge cases, generate sample data, use source control, etc., but since I'm the only person working on these applications, I'm worried I'll miss bugs that would easily be found in a team environment. Once I finish the application, either when I'm happy with it or when my deadline expires, the firm plans to use it in production. Any advice? Not to use a cliche, but as of now, I simply work "to the best of my ability" and hope that it's enough. Incidentally, I'm under both strict NDAs and laws about classified material, so I don't discuss the applications with friends who have actually worked in software development. (In case it's not obvious, I am not a software developer by trade, and even my experience with other aspects of information technology/computer science is limited and restrained to dabbling, for the most part.)

    Read the article

  • Have there been attempts to make object containers that search for valid programs by auto wiring compatible components?

    - by Aaron Anodide
    I hope this post isn't too "fringe" - I'm sure someone will just kill it if it is :) Three things made me want to reach out about this now:

    1. Decoupling is at the forefront of design.
    2. TDD inspires the idea that it doesn't matter how a program comes to exist as long as it works.
    3. Seeing how often the adapter pattern is applied to achieve (1).

    I'm almost sure this has been tried; I have a memory of reading about it around the year 2000 or so. If I had to guess, it was maybe about an earlier version of the Java Spring framework. At that time we were not so far from the days when the belief was that computer programs could exhibit useful emergent behavior. I think the article said it didn't work, but it didn't say it was impossible. I wonder if since then it has been deemed impossible, or simply an illusion due to a false assumption of similarity between a brain and a CPU. I know this illusion existed because I had an internship in 1996 where I programmed neural nets that were supposedly going to exhibit "brain damage". STILL, after all that, I'm sitting around this morning and not able to shake the idea that it should be possible to have a method of programming that allows autonomous components to find each other, attempt to collaborate, and have their outputs evaluated against a set of desired results.

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75 MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.
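
    If there are many servers in the domain, editing each file by hand gets tedious; a small script (my sketch, not an Oracle-provided tool) can flip the flag in every dms_config.xml passed to it:

        # disable_dms_dumps.py - set enabled="false" on the <dump> element of
        # each dms_config.xml given on the command line. Assumes the element
        # layout shown above; if the file declares an XML namespace, the
        # find() path would need adjusting.
        import sys
        import xml.etree.ElementTree as ET

        for path in sys.argv[1:]:
            tree = ET.parse(path)
            dump = tree.getroot().find(".//dump")
            if dump is not None:
                dump.set("enabled", "false")
                tree.write(path, encoding="utf-8", xml_declaration=True)
                print("disabled periodic dumps in", path)

    As with any hand edit of these files, take a backup first and restart the affected servers for the change to take effect.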

    Read the article

  • Interesting fault attempting to install Ubuntu 12.04.3 or 13.10 on MSI GS70 2OD-001US

    - by cjaredrun
    Attempting to install Ubuntu on an MSI GS70 2OD-001US notebook pre-installed with Windows 8. For the record, this is what I have attempted. All cases were attempted with Ubuntu 12.04.3 AND 13.10 images via USB drives.

    Case 1 - Attempt no changes and simply boot from USB.
    Result: It seems to want to work initially. The Ubuntu logo pops up and the dots start moving. After a few seconds the dots freeze and this screen is shown: http://i.imgur.com/C7pyMRk.jpg

    Case 2 - Start playing with the BIOS: in Windows, turning off fast boot; in the BIOS, turning off Secure Boot; in the BIOS, turning off UEFI and selecting LEGACY.
    Result: Brought to a black screen with a - blinking at me.

    Case 3 - Alternating BIOS settings: in Windows, turning off fast boot; in the BIOS, turning off Secure Boot; in the BIOS, turning off UEFI and selecting UEFI with CSM.
    Result: Brought to a black screen, no other indication of change.

    It's pretty frustrating to feel locked out of a laptop that I have paid good money for, and I'm sure I am not alone in that. It does however appear that there has been some success, so I am hopeful that someone might be able to help me. FYI: dual booting is not necessary for me, if that helps at all... I'm not sure if this is a reasonable option, but if I completely wipe the HDDs clean, with no trace of Windows at all, would this still be a problem? Also, would opening up the laptop and replacing the HDDs with brand-new ones be a solution? Thanks for any input or suggestions.

    Read the article

  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders etc., but what I am more interested in is encapsulation with regard to generating useful F# code libraries in .NET; this is done by using namespaces and modules. Let's have a look at some C# code first…

        using System;

        namespace EncapsulationNS
        {
            public class EncapsulationCLS
            {
                public static void TestMethod()
                {
                    Console.WriteLine("Hello");
                }
            }
        }

    Pretty simple stuff… now the F# equivalent….

        namespace EncapsulationNS

        module EncapsulationMDL =
            let TestFunction =
                System.Console.WriteLine("Hello")
                ()

    Even easier… let's look at some specifics about F# namespaces. Namespaces are open, meaning multiple source files and assemblies can contribute to the same namespace. So namespaces are a great way to group modules together, and the question needs to be asked: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files and simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That being said, one is not restricted to one module per file – there is flexibility to have multiple modules in one code file. However, with my limited F# experience I would still recommend using the file as the standard level of separating modules, as it is very easy to then find your way around a solution. An important note about interop between F# and other .NET languages: I wrote a blog post a while back about a very basic F# to C# interop. If I were to reference an F# library in a C# project (for instance 'TestFunction'), C# would show this method as a static method call, meaning I would not have to instantiate an instance of the module.

    Read the article

  • How can I get a gnome environment in my VNC session?

    - by adante
    When I start VNC I have an empty desktop, without the ability to manage windows or start apps etc. I'd like to have a desktop environment to be able to do basic desktop things (someone asked me why I wanted this - I can't really say, except that I would like my computer to be useful). My focus at the moment is basically having a working environment with as little time/effort expenditure as possible, as opposed to spending a full-time week learning the most trivial and arcane details of X, VNC, GNOME or whatever passes for the current desktop architecture standard of the hour. What command or series of hoops do I have to jump through to achieve this? I have tried running gnome-session, but it looks like it is attempting to run Compiz and fails spectacularly. I've also tried running metacity, but this simply gives me titlebars on my windows (this is great! But I'd also like the taskbar and other stuff). I considered trying to start gnome-session in a way that it uses metacity instead of Compiz, but I don't know how to do this. Tutorials on the net exist for changing to metacity - once you already have Compiz running. Not so useful if Compiz does not run.

    Read the article
