Search Results

Search found 23901 results on 957 pages for 'deployment process'.


  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas, and it is not necessary for booting; for that reason I don't see any need to update any initrd images. I have read a lot of information (as listed below) and find the setup and build process confusing. I need two recipes (a condensed sketch follows the list of sources):
    1. how to set up/configure the build environment, once;
    2. the steps to follow after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko).
    The sources that have been used are:
    build one kernel module - Google search
    http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
    http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
    http://www.pixelbeat.org/docs/rebuild_kernel_module.html
    How do I build a single in-tree kernel module?
    http://ubuntuforums.org/showthread.php?t=1153067
    http://ubuntuforums.org/showthread.php?t=2112166
    http://ubuntuforums.org/showthread.php?t=1115593
    build one kernel module ubuntu - Google search
    'make +single +kernel +module' - Ask Ubuntu
    'make +kernel +module' - Ask Ubuntu
    My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
    '"Invalid module format"' - Ask Ubuntu
    Driver installation: compiling source code for newer kernel
    Modprobe: 'Invalid module format', yet works after insmod
    "Symbol version dump" "is missing" - Google search
    http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
    Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
    "no symbol version for module_layout" when trying to load usbhid.ko
    Broken links inside Linux header file folder
    'make modules_install' - Ask Ubuntu
    'modules_install' - Ask Ubuntu
    Empty build directory in custom compiled kernel
    Not able to see pr_info output
    In which directory are the kernel source files and how can I recompile it?
    How can I compile and install that patched libata-eh.c file?
    'modules_install +depmod' - Ask Ubuntu
    modules_install depmod - Google search
    "make modules_install" - Google search
    http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
    http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
    https://wiki.ubuntu.com/KernelCustomBuild
    http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
    http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
    "make prepare" - Google search
    "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
    http://ubuntuforums.org/showthread.php?t=1963515
    ubuntu "make prepare" version - Google search
    http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
    https://help.ubuntu.com/community/Kernel/Compile
    How do I compile a kernel module?
    How to add a custom driver to my kernel?
    Compile and loading kernel module without compiling the kernel
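    A condensed recipe distilled from these sources; a sketch only, assuming the stock Ubuntu 14.04 kernel, a deb-src line in sources.list, and that mvsas lives in drivers/scsi/mvsas:

      # one-time setup: toolchain, headers and the source of the running kernel
      sudo apt-get install build-essential linux-headers-$(uname -r)
      apt-get source linux-image-$(uname -r)
      cd linux-*

      # match the running kernel's configuration and symbol versions; this avoids
      # "Invalid module format" and "no symbol version for module_layout" on load
      cp /boot/config-$(uname -r) .config
      cp /usr/src/linux-headers-$(uname -r)/Module.symvers .
      make oldconfig && make prepare && make scripts

      # after every edit of the mvsas .c/.h files: rebuild only that directory
      make M=drivers/scsi/mvsas

      # install and reload the rebuilt module (no initrd update needed here,
      # since mvsas is not required at boot)
      sudo cp drivers/scsi/mvsas/mvsas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/mvsas/
      sudo depmod -a
      sudo modprobe -r mvsas && sudo modprobe mvsas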

    Read the article

  • Role based access control in Oracle VM using Enterprise Manager 12c

    - by Ronen Kofman
    Enterprise Manager lets you control any element in the environment and define which users can do what on each element. We will show here an example of how to set up RBAC (Role Based Access Control) for Oracle VM using Enterprise Manager; this will be a very simplified explanation to help you get going. For more comprehensive explanations please refer to the Enterprise Manager User Guide. OK, first some basic Enterprise Manager terminology: Target – any element in the environment is a target – server, pool, zone, VM etc. Administrators – these are the Enterprise Manager users who can log in to the platform. Roles – roles are privilege profiles which can be applied to Administrators. The first step is to discover the virtual environment and bring it into Enterprise Manager. This process is simple and can be done in two ways: work on your Oracle VM Manager, set it up until you feel comfortable and then register it in Enterprise Manager; or use Enterprise Manager and build it all from there. In both cases we will see the same picture from Oracle VM and from Enterprise Manager; any change made in one will be reflected in the other. Oracle VM Manager: Enterprise Manager: Once you have your virtual environment set up in Enterprise Manager, it is time to start associating VMs with users (or Administrators, as they are called in Enterprise Manager). Enterprise Manager allows us to connect to multiple different identity services and import users from them, but the simplest way to add Administrators is to go to Setup->Security->Administrators and create a new Administrator. The creation wizard will walk you through several stages and allow you to assign role(s) to your newly created Administrator; using roles can really shorten the process if done multiple times. When you get to the “Target Privileges” stage, scroll down to the “Target Privileges” section at the bottom. In this section you can add targets (virtual machines in our case) and define the type of privileges you would like to assign to the Administrator you are creating. In this example I chose one of the VMs and granted full privileges to the newly created Administrator. Administrator creation wizard "Target Privileges": Now when you log in as the newly created administrator, you will only see the VM that was assigned to you and will have full control over it. That’s it, simple and straightforward. Enterprise Manager offers many more things which I skipped here, but the point is that if you need role-based access control, Enterprise Manager can give it to you in a very easy way. Oh, and one more thing: virtualization management in Enterprise Manager has no license cost, sweet.

    Read the article

  • Reminder - Article about SharePoint localization

    - by panjkov
    I have already written about SharePoint localization – in January I published a blog post with download links for Language Interface Packs for SharePoint 2010 for the official languages of Bosnia and Herzegovina (BiH). At approximately the same time, I wrote a detailed article for the web portal www.hardwarebase.net, which was published in April 2012. The title of the article is “Localize your SharePoint servers for BiH languages”, and it explains the process of installing a SharePoint LIP and using it on a Team Site. Full...(read more)

    Read the article

  • apt-get broken + dependencies issue + many software uninstalled

    - by vnc786
    OS = Ubuntu 12.04 64-bit, 3.2.0-29-generic. What did I do? I ran apt-get purge libre*, which I thought would remove LibreOffice 3.5 – a big mistake. After a couple of moments I realised that it was removing all sorts of other software, so I pressed Ctrl-C, which stopped the apt process. I restarted my system and found that the following software was missing: apt-get, evince (pdf), cheese etc. Here is the full list: http://pastebin.com/CWHrw10y. I managed to install the apt .deb file through dpkg, but now I am not able to do any installation, so I just removed /var/lib/dpkg/status and created a new one, but that didn't help.
    apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4
      libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl
      liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl
      libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
    Suggested packages:
      debconf-doc debconf-utils whiptail dialog gnome-utils
      libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl
      libqtcore4-perl apt bzip2 ncompress xz-lzma
    The following NEW packages will be installed:
      apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4
      libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl
      liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl
      libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
    0 upgraded, 24 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/9,246 kB of archives.
    After this operation, 29.9 MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    E: Cannot get debconf version. Is debconf installed?
    debconf: apt-extracttemplates failed: No such file or directory
    dpkg: regarding .../libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb containing libgcc1, pre-dependency problem:
     libgcc1 pre-depends on multiarch-support
      multiarch-support is unpacked, but has never been configured.
    dpkg: error processing /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb (--unpack):
     pre-dependency problem - not installing libgcc1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb
    E: Internal Error, No file name for libc6
    W: Could not perform immediate configuration on 'multiarch-support:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    # apt-get -u dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies:
     libc6 : Depends: libgcc1 but it is not installed
             Depends: tzdata but it is not installed
    E: Unmet dependencies. Try using -f.
    I have tried the links below, but they did not help. Thanks.
    Unable to install due to debconf problem
    How do I resolve unmet dependencies?
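    Two things stand out in the log above: the hand-made /var/lib/dpkg/status (dpkg keeps daily backups that are a much better starting point than an empty file), and multiarch-support being "unpacked, but has never been configured", which is exactly what blocks the libgcc1 pre-dependency. A hedged recovery sketch along those lines (the backup index and the package names at the end are assumptions; check what actually exists on your system):

      # restore the most recent dpkg status backup instead of the hand-made file
      sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status

      # configure the half-installed pre-dependency first, then everything pending
      sudo dpkg --configure multiarch-support
      sudo dpkg --configure -a

      # with the pre-dependency satisfied, let apt finish the repair
      sudo apt-get -f install

      # finally, reinstall what the interrupted purge removed (see the pastebin list)
      sudo apt-get install --reinstall evince cheese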

    Read the article

  • Clustering for Mere Mortals (Pt3)

    - by Geoff N. Hiten
    The Controller
    Now we get to the meat of the matter. You want a virtual cluster? The first thing you have to do is create your own portable domain. Start with a plain vanilla install of Windows 2003 R2 Standard on a semi-default VM (1 GB RAM, 2 cores, 2 NICs, 128 GB dynamically expanding VHD file). I chose this because it had the smallest disk and memory footprint of any currently supported Microsoft Server product. I created the VM with a single dynamically expanding VHD, one fixed 16 GB VHD, and two NICs. One NIC is connected to the outside world and the other one is part of an internal-only network. The first NIC is set up as a DHCP client. We will get to the other one later.
    I actually tried this with Windows 2008 R2, but it failed miserably. Not sure whether it was 2008 R2 or the fact I tried to use cloned VMs in the cluster. Clustering is one place where NewSID would really come in handy. Too bad Microsoft bought and buried it.
    Load and patch the OS (hence the need for the outside connection). This is a good time to go get dinner. Maybe a movie too. There are close to a hundred patches that need to be downloaded and applied. Avoiding that mess was why I put so much time into trying to get the 2008 R2 version working. Maybe next time. Don't forget to add the extensions for VMLite (or whatever virtualization product you prefer).
    Set a fixed IP address on the internal-only NIC. Do not give it a gateway. Put the same IP address for the NIC and for the DNS server. This IP should be in a range that is never available on your public network; you will need all the addresses in the range available. See the previous post for the exact settings I used. I chose 10.97.230.1 as the server. The rest of the 10.97.230 range is what I will use later. For the curious, those numbers are based on elements of my home address. Not truly random, but good enough for this project. Do not bridge the network connections. I never allowed the cluster nodes direct access to any public network.
    Format the fixed VHD and leave it alone for now.
    Promote the VM to a Domain Controller. If you have never done this, don't worry. The only meaningful decision is what to call the new domain. I prefer a bogus name that does not correspond to a real Top-Level Domain (TLD). .com, .biz, .net, .org are all TLDs that we know and love. I chose .test as the TLD since it is descriptive AND it does not exist in the real world. The domain is called MicroAD. This gives me MicroAD.test as my domain. During the promotion process, you will be prompted to install DNS as part of the domain creation process. You want to accept this option. The installer will automatically assign this DNS server as the authoritative owner of the MicroAD.test DNS domain (not to be confused with the MicroAD.test Active Directory domain). For the rest of the DCPROMO process, just accept the defaults.
    Now let's make our IP address management easy. Add the DHCP role to the server. Add the server (10.97.230.1 in this case) as the default gateway to assign to DHCP clients. Here is where you have to be VERY careful and bind it ONLY to the internal NIC. Trust me, your network admin will NOT like an extra DHCP server "helping" out on her network. Go ahead and create a range of 10-20 IP addresses in your scope. You might find other uses for a pocket domain controller <cough> Mirroring </cough> than just for building a cluster. And clustering in SQL 2008 and Windows 2008 R2 fully supports DHCP addresses.
    Now we have three of the five key roles ready. Two more to go.
    Next comes file sharing. Since your cluster node VMs will not have access to any outside network, you have to have some way to get files into these VMs. I simply go to the root of C: and create a "Shared" folder. I then share it out and grant full control to "Everyone" on both the share and the underlying NTFS folder. This will be immensely useful for service packs, demo databases, and any other software that isn't packaged as an ISO that we can mount to the VM.
    Finally we need to create a block-level multi-connect storage device. The kind folks at StarWind Software (http://www.starwindsoftware.com/) graciously gave me a non-expiring demo license for expressly this purpose. Their iSCSI SAN software lets you create an iSCSI target from nearly any storage medium. Refreshingly, their product does exactly what they say it does. Thanks.
    Remember that 16 GB VHD file? That is where we are going to carve out our LUNs. I created an iSCSI folder off the root, just so I can keep everything organized. I then carved five 2 GB iSCSI targets from that folder. I chose a fixed VHD for performance. I tried this earlier with a dynamically expanding VHD, but too many layers of abstraction and sparseness combined to make it unusable even for a demo. Stick with a fixed VHD so there is a one-to-one mapping between abstract and physical storage. If you read the previous post, you know what I named these iSCSI LUNs and why. Yes, I do have some left-over space. Always leave yourself room for future growth or options.
    This gets us up to where we can actually build the nodes and install SQL. As with most clusters, the real work happens long before the individual nodes get installed and configured. At least it does if you want the cluster to be a true high-availability platform.

    Read the article

  • Google Drive SDK: Publishing your website on Google Drive

    In this session we'll show you a new feature of the Google Drive SDK: the ability to publish a website from a Google Drive folder. We'll show you a brief demo involving cute animals, but don't worry, no kittens will be harmed in the process! From: GoogleDevelopers | Views: 0 | 3 ratings | Time: 03:30:00 | More in Science & Technology

    Read the article

  • Subscription service or software to handle a Magazine's PDF

    - by Paolo
    I'm looking for an installable or hosted software (service) to handle the process of public users subscribing to the Magazine and receiving the PDF automatically when an admin uploads a new one. The system will have to: handle the money part (PayPal & Co. are OK); let users buy old issues of the Magazine; warn users when their subscription is expiring, etc. PDF stamping and WordPress integration (user credential sharing, page access to subscribed goods, etc.) will be a big plus.

    Read the article

  • SQL SERVER What is Spatial Database? Developing with SQL Server Spatial and Deep Dive into Spatial

    What is a spatial database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today [...]

    Read the article

  • SEO Blogging For Profit

    SEO blogging!? Who does that? More people than you imagine. So why do so many bloggers incorporate SEO into their WordPress or Blogspot blogs? Why do they invest so much time into the process?

    Read the article

  • Caliber Point Launches Republic, A Platform-Based Multi-Tenant HR Service Delivery Solution

    - by jay.richey
    Caliber Point Business Solutions Ltd., a leading Business Process Outsourcing (BPO) service provider and a Hexaware Technologies subsidiary, today announced the launch of Republic, their multi-tenant HR services delivery solution. Built on Oracle E-Business Suite Release 12, this ready-to-use platform will cater to multiple clients in a secure, shared environment. This launch identifies Caliber Point as one of the first BPO service providers in India, and one of the few in the world, to provide a complete platform-based BPO service. More at http://bit.ly/9yyJDW

    Read the article

  • Poll Results: Foreign Key Constraints

    - by Darren Gosbell
    A few weeks ago I did the following post asking people if they used foreign key constraints in their star schemas. The poll is still open if you are interested in adding to it, but here is what the chart looks like as of today. (At the bottom of the poll itself there is a link to the live results; unfortunately I cannot link the live results in here as the blogging platform blocks the required javascript.) Interestingly, the results are fairly even. Of the 78 respondents, fractionally over half at least aim to start with referential integrity in their star schemas. I did not want to influence the results by sharing my opinion, but my personal preference is to always aim to have foreign key constraints. But at the same time, I am pragmatic about it: I do have projects where for various reasons some constraints are not defined. And I also have other designs that I have inherited, where it would just be too much work to go back and add foreign key constraints. If you are going to implement foreign keys in your star schema, they really need to be there at the start. In fact this poll was the result of a feature request for BIDSHelper asking for a feature to check for null/missing foreign keys, and I am entirely convinced that BIDS is the wrong place for this sort of functionality. BIDS is a design tool; your data needs to be constantly checked for consistency. It's not that I think it's impossible to get a design working without foreign key constraints, but I like the idea of failing as soon as possible if there is an error, and enforcing foreign key constraints lets me "fail early" if there are consistency issues with my data. By far the biggest concern with foreign keys is performance, and I suppose I'm curious as to how often people actually measure and quantify this. I worked on a project a number of years ago that had very large data volumes, and we did find that foreign key constraints had a measurable impact; but what we did was to disable the constraints before loading the data, then enable and check them afterwards. This saved us time (although not as much as having no constraints at all), but still let us know early in the process if there were any consistency issues. For the people that do not have consistent data: if you have ETL processes that you control building a star schema that you also control, then to be blunt you only have yourself to blame. It is the job of the ETL process to make the data consistent. There are techniques for handling situations like missing data as well as early- and late-arriving data. Ralph Kimball's book, The Data Warehouse Toolkit, goes through some design patterns for handling data consistency. Having foreign key relationships can also help the relational engine to optimize queries, as noted in this recent blog post by Boyan Penev.

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization: minutes of downtime will result in significant loss of revenue and reputation. There is no “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA: “people and processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on; it is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL
    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies. DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB. Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL. Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it all together: when you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle OpenWorld, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.
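    To give a flavour of the DRBD option, here is an illustrative resource definition and bring-up; a sketch only, with hypothetical hostnames, disks and addresses, and the whitepaper above walks through a supported configuration:

      # /etc/drbd.d/mysql.res -- identical on both nodes (names/IPs are made up)
      resource mysql {
        protocol C;                        # synchronous replication
        on node1 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.10.1:7789;
          meta-disk internal;
        }
        on node2 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.10.2:7789;
          meta-disk internal;
        }
      }

      # bring the resource up (run on both nodes), then promote one side
      drbdadm create-md mysql && drbdadm up mysql
      drbdadm primary --force mysql        # on the designated active node only
      mkfs.ext4 /dev/drbd0
      mount /dev/drbd0 /var/lib/mysql      # MySQL's datadir now rides on DRBD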

    Read the article

  • Importing PSTs with PowerShell in Exchange 2010 SP1

    Unless you use Red Gate's PST Importer, the import and export of PST files with Exchange 2010 is a complex and error-prone business. Microsoft have acknowledged this in the release of Exchange 2010 SP1, since they have now re-engineered the way that PSTs are handled to try to ease the pain of importing and exporting them, but it is still a matter of using PowerShell cmdlets rather than a GUI. Jaap Wesselius takes a look at the new process.

    Read the article

  • ADF @ Virtual Developer Day: Oracle Fusion Development; July 10th 2012

    - by JuergenKress
    Virtual Developer Day: Oracle Fusion Development. Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF’s integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:
    Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
    Mobile Application Development with ADF Mobile
    Oracle ADF development with Eclipse
    Oracle WebCenter Portal and ADF Development
    Application Lifecycle Management with ADF
    Building Process Centric Applications with ADF and BPM
    Oracle Business Intelligence and ADF Integration
    Live Q&A chats with Oracle technical staff
    Developer lead, manager or architect: this event has something for everyone. Don’t miss this opportunity.
    Tuesday, July 10, 2012: 9:00 a.m. – 1:00 p.m. PT | 11:00 a.m. – 3:00 p.m. CT | 12:00 p.m. – 4:00 p.m. ET | 1:00 p.m. – 5:00 p.m. BRT
    Agenda:
    9:00 a.m. – Opening
    9:30 a.m. – Keynote: Oracle Fusion Development
    Track 1: Introduction to Fusion Development | Track 2: What's New in Fusion Development | Track 3: Fusion Development in the Enterprise
    10:00 a.m. – Track 1: Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? | Track 2: Mobile Application Development with ADF Mobile | Track 3: Oracle WebCenter Portal and ADF Development
    11:00 a.m. – Track 1: Rich Web UI made simple – an ADF Faces Overview | Track 2: Oracle Enterprise Pack for Eclipse - ADF Development | Track 3: Building Process Centric Applications with ADF and BPM
    12:00 noon – Track 1: Next Generation Controller for JSF | Track 2: Application Lifecycle Management for ADF | Track 3: Oracle Business Intelligence and ADF Integration
    Sessions abstracts. Register online now! for this FREE event.
    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: OTN Virtual Developer Day,ADF,WebLogic,WebLogic basic,ias upgrade,C2B2,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying “Slow and steady wins the race.” This is the perfect mantra for the Waterfall Model, in that this model is seen as cumbersome and slow-moving.
    Waterfall Model Phases
    Requirement Analysis & Definition: This phase focuses on defining requirements for a project that is to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.
    System Design: This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design an application at the system level, in which classes, interfaces, and interactions are defined. Additionally, scalability, distribution and reliability should be considered in all decisions. The desired output for this phase is a functional design document that states all of the architectural decisions that have been made in regards to the project, as well as diagrams such as sequence and class diagrams.
    Software Design: This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.
    Implementation / Coding: This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together as a whole.
    Software Integration & Verification: This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.
    System Verification: This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.
    Operation & Maintenance: This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their original requirements have been met. This phase will also validate the correctness of their requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase will be handled in this section.
    The Waterfall Model's linear and sequential methodology does offer a project certain advantages and disadvantages.
    Advantages of the Waterfall Model:
    Simple to implement and execute for projects and/or company-wide
    Limited demand on resources
    Large emphasis on documentation
    Disadvantages of the Waterfall Model:
    Completed phases cannot be revisited, regardless of whether issues arise within a project
    Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarity regarding the client's desires
    Small changes or errors that arise in applications may cause additional problems
    The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their desires evolve
    Excess documentation
    Phases are cumbersome and slow-moving
    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.
    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples, and subsequent changes are made to quickly align the project with customers' desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become clearer. This process allows customers to feel their way around the application to ensure that they are getting exactly what they want. This model also works well for determining the feasibility of certain approaches in regards to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.
    Advantages of Prototyping:
    Active participation from users and customers
    Allows customers to change their mind in specifying requirements
    Customers get a better understanding of the system as it is developed
    Earlier bug/error detection
    Promotes communication with customers
    The prototype could be used as the final production version
    Reduced time needed to develop applications compared to the Waterfall method
    Disadvantages of Prototyping:
    Promotes constantly redefining project requirements, which can cause major system rewrites
    Potential for increased complexity of a system as its scope expands
    Customers could mistake the prototype for the working version
    Implementation compromises could increase the complexity when applying updates and/or application fixes
    When companies are trying to decide between the Waterfall and Prototyping models, they need to evaluate the benefits and disadvantages of both.
    Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach, because it can reduce the time needed to complete the project: there is more focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer towards more of a Waterfall model, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • White Paper: How the Security Development Lifecycle Helped Improve the Security of the 2007 Microsoft Office System

    The 2007 Office system was the first Microsoft Office release to include the standardized Security Development Lifecycle (SDL) process throughout the product development life cycle. Read this paper to understand how the SDL contributed to improving the security of the 2007 Microsoft Office System.

    Read the article

  • Retrieving system information without WMI

    - by user94481
    I want to write an application that can fetch system information like CPU-Z does, for example. I don't want to rely on WMI, because I want to grab things like information about the manufacturing process of the GPU (like from a database), and I don't want to maintain that myself because it would require too much effort. I have already come across the HWiNFO32 SDK, but I wonder if there are any (maybe free) alternatives to it?

    Read the article

  • Extending WikiPlex with Scope Augmenters

    - by mhawley
    [In addition to blogging, I am also using Twitter. Follow me: @matthawley] Another extension point with WikiPlex is Scope Augmenters. Scope Augmenters allow you to post-process the collection of scopes, inserting or removing scopes, prior to rendering. WikiPlex comes with 3 out-of-the-box Scope Augmenters that it uses for indentation, tables, and lists. For reference, I'll be explaining… (read more)

    Read the article

  • dpkg is locked error even after a reboot

    - by blade19899
    I get a "dpkg is locked" error even after a reboot, and I can't run any apt-get related commands:
    E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
    E: Unable to lock directory /var/lib/apt/lists/
    E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
    E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
    (My Ubuntu is in Dutch; the error messages above have been rendered back into their standard English equivalents.)
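    A sketch of the usual diagnosis, assuming the fuser tool from psmisc is installed; only delete the lock files if nothing is actually holding them:

      # find out which process (if any) holds the locks
      sudo fuser -v /var/lib/dpkg/lock /var/lib/apt/lists/lock

      # a background updater (update-manager, apt cron job) often shows up here;
      # let it finish or stop it. If nothing holds the locks, they are stale:
      sudo rm /var/lib/apt/lists/lock /var/lib/dpkg/lock
      sudo dpkg --configure -a      # finish anything left half-configured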

    Read the article

  • What are some reputable merchant account providers for high risk payment web sites?

    - by GregH
    I am helping to set up an online cigar web site. However, it has become a real pain to take payments online since tobacco is considered a "high-risk" item and nobody will provide a merchant account to process the payments. It looks like there are companies that specialize in high-risk merchant accounts. I was wondering if anybody could recommend a high-risk merchant account and payment processing provider?

    Read the article

  • Problem with apt-get [install, purge, autoremove, upgrade]

    - by Ark
    I am trying to install openvpn using apt-get, but I get an error during the process. (As far as I understand, the issue is with unattended-upgrades, which I could not upgrade due to the apt-get problem.) I cannot upgrade, install, autoremove or purge anything else, which I need to do since /boot is full. (For the upgrade, I did run apt-get update before trying.) Trace (on sudo apt-get autoremove):
    0 upgraded, 0 newly installed, 79 to remove and 421 not upgraded.
    35 not fully installed or removed.
    Need to get 24.1 kB of archives.
    After this operation, 140 MB disk space will be freed.
    Do you want to continue [Y/n]? y
    Get:1 http://us.archive.ubuntu.com/ubuntu/ oneiric/main unattended-upgrades all 0.73ubuntu1 [24.1 kB]
    Fetched 24.1 kB in 0s (31.2 kB/s)
    Preconfiguring packages ...
    (Reading database ... 287206 files and directories currently installed.)
    Removing flashplugin-downloader:i386 ...
    Removing libasound2-plugins:i386 ...
    Removing libpulse0:i386 ...
    Removing libsndfile1:i386 ...
    Removing libvorbisenc2:i386 ...
    Removing libvorbis0a:i386 ...
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    (Reading database ... 287174 files and directories currently installed.)
    Preparing to replace unattended-upgrades 0.73ubuntu1 (using .../unattended-upgrades_0.73ubuntu1_all.deb) ...
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "stop" failed.
    dpkg: warning: subprocess old pre-removal script returned error exit status 1
    dpkg - trying script from the new package instead ...
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "stop" failed.
    dpkg: error processing /var/cache/apt/archives/unattended-upgrades_0.73ubuntu1_all.deb (--unpack):
     subprocess new pre-removal script returned error exit status 1
    update-rc.d: warning: unattended-upgrades start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (none)
    update-rc.d: warning: unattended-upgrades stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
    Checking for running unattended-upgrades:
    Traceback (most recent call last):
      File "/usr/share/unattended-upgrades/unattended-upgrade-shutdown", line 27, in <module>
        import apt_pkg
    ImportError: No module named apt_pkg
    invoke-rc.d: initscript unattended-upgrades, action "start" failed.
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/unattended-upgrades_0.73ubuntu1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Maybe I missed something basic, but I would appreciate pointers on solving the issue. Thanks.
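    The repeated "ImportError: No module named apt_pkg" in the maintainer scripts points at a broken python-apt (which ships apt_pkg) rather than at unattended-upgrades itself. A hedged repair sketch; if the plain reinstall is refused, the download/dpkg route sidesteps the failing scripts:

      # apt_pkg comes from python-apt; reinstalling it usually restores the import
      sudo apt-get install --reinstall python-apt

      # if apt-get refuses to run, fetch the .deb and install it directly
      apt-get download python-apt && sudo dpkg -i python-apt_*.deb

      # then let dpkg finish the interrupted unattended-upgrades upgrade
      sudo dpkg --configure -a
      sudo apt-get -f install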

    Read the article

  • The Making of Middle Earth [Video]

    - by Jason Fitzpatrick
    The Lord of the Rings movie franchise was filmed in stunning New Zealand locations. The Hobbit continues that tradition; check out this mini-documentary to see the scouting process and take a sneak peek at the film. The Making of Middle Earth [via Mashable]

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.
    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.
    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.
    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.
    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.
    So, in broad terms, the steps required to fulfil a client order are as follows:
    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings to be used by SSIS to point to the new databases and file locations.
    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.
    Step 3: Load the staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load the extension files containing name/value pairs; these will provide the client-specific attributes in the OLAP cube.
    Step 4: Load the data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables.
    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.
    Step 6: Deploy & process the OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.
    Step 7: Back up and drop databases. Drop the staging database, as it is no longer required. Back up the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete. Start processing the next order, ad infinitum.
    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.

    Read the article

  • Getting stuck at a command prompt while installing Ubuntu 12.04 LTS in VMware Player

    - by naveen
    The full installation happens, and the VM reboots to a command prompt. When I log in I get into the system, but I am stuck there. The tutorials say that a GUI will come up after booting, and I am never prompted to install VMware Tools. When I select the ISO file it says an easy install will be done, but when I manually try to install VMware Tools it says that is not allowed while an easy install is in process.
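    If the VM boots to a text console, the desktop packages may simply never have been installed (easy install from a server or minimal ISO behaves this way), and on 12.04 the in-guest tools are available from the repositories, which sidesteps the reserved VMware Tools ISO entirely. A sketch, assuming the guest has network access:

      # install the GUI if it was never part of the install
      sudo apt-get update
      sudo apt-get install ubuntu-desktop

      # open-vm-tools replaces the VMware Tools ISO that easy install holds on to
      sudo apt-get install open-vm-tools open-vm-toolbox
      sudo reboot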

    Read the article
