Search Results

Search found 37989 results on 1520 pages for 'software as a service'.


  • Help downloading, installing, and using my wireless stick modem and software

    - by ADAM
    I installed Ubuntu Linux after my Windows OS went totally dead with a fatal blue screen; I cannot reboot, boot into safe mode, or reinstall from the CD-ROM. Under Ubuntu I am still unable to install my wireless USB stick modem to connect and surf the Internet wirelessly through my provider, so I am using a wireless network from the neighborhood. I also find it difficult to download, install, and run programs and applications from the Internet, such as Skype and RealPlayer; it is different from Windows and not that easy to use. For example, when I click to download RealPlayer, it opens as a text file only, and I cannot use it as it should be used, for video and for downloading and recording from YouTube. How do I do this, and is there a program for Ubuntu, similar to those on Windows, to download, record, and save from YouTube? Is there any trick to restart or reboot the dead Windows OS? Can I use Linux to reboot Windows, or to recover my files from the blue-screened Windows 7 install? Any help is highly appreciated. Thanks. Please email me at: [email protected]
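
    A hedged sketch of the usual Ubuntu workflow, in case it helps frame an answer (package and file names here are illustrative, not from the question): applications normally come from the repositories rather than from browser downloads, which sidesteps the opens-as-a-text-file problem entirely.

        # install applications from the Ubuntu repositories instead of
        # downloading installers in the browser:
        sudo apt-get update
        sudo apt-get install vlc        # media player that covers most video
        # a manually downloaded .deb package can be installed like this
        # (file name is hypothetical):
        sudo dpkg -i skype.deb
        sudo apt-get install -f         # pull in any missing dependencies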

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 – Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was what the Cone of Uncertainty looks like in agile projects (or how agile methodologies reshape the cone we know from waterfall projects). This posting illustrates two cones – one for the waterfall world and one for the agile world. Cone of Uncertainty: the Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). (Cone of Uncertainty, taken from the MSDN Library page Estimating.) The closer we are to the project's end, the more accurate our estimates become; when the project ends, we know exactly how much time every task took. As we can see, the cone is widest at the point where we usually have to give our estimates – somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and never learn a thing afterwards. This cone is inevitable for software development, but the agile methodologies that try to make the software world better are able to change its shape. Cone of Uncertainty in agile projects: agile methodologies usually try to avoid BDUF, waterfalls, and other things that make our mistakes highly expensive. Of course, we are not the only ones who make mistakes – don't forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does this mean? We estimate functionalities that are easier for us to understand and implement, so our estimates are more accurate. As we move from a few big iterations to many small iterations, we also slice the Cone of Uncertainty into smaller cones. This is how the cone looks when agile methodologies are used (Cone of Uncertainty in agile projects): we have more cones to live with, but they are far smaller. I don't have any numbers to put here because I didn't find any, but this "chart" should still give you the point: many smaller iterations produce more, but far smaller, cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow over our heads. One more note: consider that both charts in this posting describe exactly the same phase of the same project – only the uncertainties are different.

    Read the article

  • Implementation of a Rules Engine in Your Business Applications

    - by enonu
    I'm looking for an experience-driven answer from a few software engineers who have implemented a rules engine in their internal business applications. How has it affected your business in the following ways: the ability to launch and iterate over business-driven logic; the ability to have "business users" perform the actual modification of those rules rather than developers; the ability to comprehend the business rules in general; the quality of the software releases (more or fewer bugs from the end user's POV); the speed of the applications? If you had to do it all over again, what would you do differently? Lastly, I'm looking for a qualification of your answer with respect to the architecture. Would you do the same thing if you were deploying to a 1-machine setup, vs. your current architecture, vs. a multi-tier cloud-based distributed architecture using 1000s of machines? How would it be different? Thanks!

    Read the article

  • Red Gate Software announces speaker line up for US SQL in the City tour

    SQL in the City is a free, full-day training and networking event for database professionals. After the success of last year's event, Red Gate has expanded the event to cover six cities from sea to shining sea: New York, Austin, San Francisco, Chicago, Boston, and Seattle.

    Read the article

  • How to import mass accounts into iKode Newsletter Server?

    - by Brownsithily Smith
    I have sent out emails to my 6.2k subscribers through iKode Newsletter Server, and only about 50 were flagged as spam – less than 1%, which is amazing! iKode's web-based email marketing software also provides a double opt-in subscription form, which is effective for targeting a specific audience. However, if I want to import a mailing list into this software, I currently need to add the addresses one by one, which is a waste of time. Does iKode provide a mass account import ability, or a way to upload a mailing list file?

    Read the article

  • How to turn a PDF into a text-searchable PDF?

    - by don.joey
    I have a number of scanned documents in PDF form and I want to be able to search them. How can I do that? Essentially I have to OCR the PDF and then blend the extracted text back into a new PDF. I have unsuccessfully tried pdfocr (which gives me this issue: https://github.com/gkovacs/pdfocr/issues/7) and pdfsandwich (which the Software Center says is a poor package that I should not install). Is there a software package I am unaware of, or a script that does this?
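
    One possibility worth checking, offered here as a hedged sketch rather than anything from the original thread: the ocrmypdf tool wraps Tesseract and writes the recognized text back into the PDF as an invisible layer over each page image – exactly the OCR-then-blend pipeline described above.

        # a minimal sketch, assuming ocrmypdf and its Tesseract
        # dependency are installed; file names are illustrative:
        ocrmypdf scanned.pdf searchable.pdf
        # the output keeps the original page images but gains a hidden
        # text layer, so PDF viewers can search and copy the text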

    Read the article

  • Tips to Find the Best Keyword Research Software

    Keyword research plays a critical role in search engine optimization. The search engines use keyword phrases to identify the web pages that are relevant to those words. For instance, if you are selling soft toys, then "soft baby toys", "buy soft toys", and "soft play toys" are some of the right keywords that can bring a good amount of traffic to your site.

    Read the article

  • Identifying Service Error in Fedora 16

    - by Cerin
    How do you find the cause of a failed service start in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info. [root@host ~]# systemctl start httpd.service Job failed. See system logs and 'systemctl status' for details. [root@host ~]# systemctl status httpd.service httpd.service - The Apache HTTP Server (prefork MPM) Loaded: loaded (/lib/systemd/system/httpd.service; enabled) Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS) Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE) Main PID: 1062 (code=exited, status=0/SUCCESS) CGroup: name=systemd:/system/httpd.service So the first command fails and tells me to run another command, which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating, nothing seems to have been written to the logs: [root@host ~]# ls -lah /var/log/httpd/ total 8.0K drwx------. 2 root root 4.0K Jun 21 16:19 . drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 .. -rw-r----- 1 root root 0 Jun 21 16:19 modsec_audit.log -rw-r----- 1 root root 0 Jun 21 16:19 modsec_debug.log
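
    A few hedged avenues for surfacing the real error (standard Apache and systemd tooling, not from the original post):

        # check the config for syntax errors first:
        httpd -t                        # or: apachectl configtest
        # run the server in the foreground, single-process debug mode,
        # so the failure prints straight to the terminal:
        httpd -X
        # classic syslog still captures daemon stderr on many setups:
        tail -n 50 /var/log/messages
        # on systemd versions that ship the journal, the unit's output
        # is captured there (assumes journalctl is available):
        journalctl -u httpd.service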

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain of SOAs. An SOA (and services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter – among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter, or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point – sort of like 3NF in databases – and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?

    Read the article

  • Failed to start up after upgrading software in ubuntu 10.10

    - by Landy
    I asked this question on SuperUser an hour ago; then I learned about this community, so I moved the question here... I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there were some updates to install, and I confirmed the action. I should have read the update list, but I didn't; I can only remember that there was an update for cups. After the upgrade, Update Manager required a restart, which I also confirmed. But after the restart, the computer can't start up. There are errors in the console: Begin: Running /scripts/init-premount ... done. Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done. [xxx]usb 1-8: new high speed USB device using ehci_hcd and address 3 [xxx]usb 2-1: new full speed USB device using ohci_hcd and address 2 [xxx]hub 2-1:1.0: USB hub found [xxx]hub 2-1:1.0: 4 ports detected [xxx]usb 2-1.1: new low speed USB device using ohci_hcd and address 3 Gave up waiting for root device. Common problems: - Boot args (cat /proc/cmdline) - Check rootdelay= (did the system wait long enough?) - Check root= (did the system wait for the right device?) - Missing modules (cat /proc/modules; ls /dev) FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory ALERT! /dev/sda1 does not exist. Dropping to a shell! BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs)[cursor is here] At the moment, I can't input anything in the console; the keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks. =============== PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev). PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64. PS3: From this link, I think the keyboard doesn't work because the USB driver is not loaded at that time.
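
    A hedged recovery sketch, assuming the root filesystem itself is intact (which PS2 suggests): boot from a live CD/USB, chroot into the installed system, and rebuild the initramfs whose modules.dep the boot messages say is missing.

        # from an Ubuntu live CD/USB session:
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt
        # rebuild the initramfs for the kernel named in the FATAL lines:
        update-initramfs -u -k 2.6.35-22-generic
        exit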

    Read the article

  • Working as a software developer in a small town [closed]

    - by James
    I'm thinking of moving back to a small town in the near future. Coming from a large city (Dallas), I'm worried about being able to find work as a developer. I've worked remotely for companies as a contractor before, but would prefer a full time position for health insurance. Has anyone successfully made a good career for themselves while living outside of a major city (the nearest big city will be Minneapolis, about 3 hours away)? If so, how did you do it and what steps could I take between now and then to maximize my chances for success?

    Read the article

  • Diagramming software with API allowing high customisation of shapes and actions

    - by jenson-button-event
    I am after something like Visio or Lucid: a relatively simple charting/diagramming tool for building tree-like structures from (my) pre-defined nodes (squares), but with a powerful API. Requirements: limit the types of objects allowed to be dropped on the diagram; validate a model (e.g. a node of type A must precede a node of type B; the user must enter a node Title); export a model; import a model. Our domain is very specific, and it's a tool we'd want to offer to some of our power users. The $500 Visio licence isn't really within the business model. I'll put no constraints on framework or deployment (web or desktop) – is there anything out there?

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am thinking about the best way to organize my code. The project is basically going to be a general management system capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wiki, forum, project management system, etc., each of them being just a separate 'module'. You can read more about it on my blog, posted here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage it. Then you just add on modules as you need them; the module I will be creating first is a simple blog, to test things out before I move on to the big module, which is a project management system. Now I am trying to think of the best way to structure this so that it is easy for users to add in their own modules, but also easy for me to update the core system without worrying about users modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When users want to add in their own modules, they would just add a new project (or multiple projects). The thing is, I am not sure it is even possible to use multiple projects, each with its own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript, and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript, and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript, and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like: (1) a human being pushes The Button; (2) pull branch A from the version-control repository B; (3) run command C to compile it; (4) copy the binaries D to servers E1 through E10; (5) on each server, run command F to make all changes take effect. Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine) so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
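
    For what it's worth, the common answer (sketched here under assumptions, not taken from the post) is to make your own software look like prepackaged software: have steps (2)–(4) build a versioned .deb/.rpm and publish it to an internal repository, after which the familiar ensure => latest pattern applies to your code exactly as it does to Apache. The manual pipeline being replaced might look roughly like the following, keeping the question's A–F placeholders and assuming git for concreteness:

        # hedged sketch; repo URL, build command, and restart command
        # are the question's placeholders, not real names:
        git clone -b branchA http://internal-vcs/repoB.git build/
        (cd build && ./commandC)                     # compile
        for host in serverE{1..10}; do
            scp build/binariesD "$host":/opt/app/    # copy the binaries
            ssh "$host" /opt/app/commandF            # make changes take effect
        done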

    Read the article

  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP1 release was simultaneous with the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list, including Test and Lab Manager: http://support.microsoft.com/kb/983509 The release addresses some of the most requested features from customers of Visual Studio 2010, including: better Help support; IntelliTrace support for 64-bit and SharePoint; Silverlight 4 Tools in the box; unit testing support on .NET 3.5; and a new performance wizard for Silverlight. Another major addition is the announcement of unlimited load testing for Visual Studio 2010 Ultimate with MSDN subscribers! The benefits of the Visual Studio 2010 Load Test Feature Pack: improved overall software quality through early lifecycle performance testing – lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load; by integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality. Higher productivity and reduced TCO with the ability to scale without incremental costs – development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010. Useful links: download the Visual Studio 2010 Load Test Feature Pack Deployment Guide; get started with stress and performance testing with Visual Studio 2010 Ultimate; quality solutions best practice: Enabling Performance and Stress Testing throughout the Application Lifecycle; hands-on lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010; how-do-I videos: Use ASP.NET Profiler in Load Tests and Use Network Emulation in Load Tests; VHD/VPC walkthrough: Getting Started with Load and Performance Testing; best practice guidance: Visual Studio Performance Testing Quick Reference Guide.

    Read the article

  • Partner Webcast - Oracle Taleo Cloud Service - 12 Dec 2012

    - by Thanos
    Talent Intelligence is the insight companies need to unlock the power of their most critical asset – their people. CEOs are charged with driving growth, and the one ingredient of growth that's common across all industries and regions – both in good economic times and in bad – is people. In every economic environment, Talent Intelligence is a company's biggest lever for driving growth, innovation and customer success. Oracle Taleo Cloud Service provides a comprehensive suite of SaaS products that help companies manage their investment in people by improving their Talent Intelligence. The Oracle Taleo Cloud Service enables enterprises and midsize businesses to recruit top talent, align that talent to key goals, manage performance, develop and compensate top performers, and turn today's best performers into tomorrow's leaders. Join us to find out more about the industry's broadest cloud-based talent management platform. Agenda: Oracle HCM footprint; Taleo value proposition; Taleo quick tour; why invest in Taleo; resources; demonstrating Taleo; Q&A. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. Existing content is available on YouTube, SlideShare and Oracle Mix.

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess: you can only enter values in the US system, but you can choose the output to be US or metric, and the code to do the conversions is everywhere. So I want to organize this and put together some simple rules. Here is what I came up with. Rules: (1) the user can enter values in either US or metric, and the user interface will take care of marking this properly; (2) all units will be stored internally as US, since the majority of the system already stores data that way and depends on it – it shouldn't matter, I suppose, as long as you don't mix units; (3) all output will be in US or metric, depending on the user's selection/preference. In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already returns strings like '4 x 13/16" screws', meaning "four 13/16-inch screws". I need that measurement to be in either US or metric. Where exactly do I put the conversion code for this unit? The above already mixes presentation and data, and the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x ", and the " screws", but the question remains: where do I put the conversion code? Different locations for conversion routines: 1) Right now the string lives in the class where it's produced. I can put conversion code right into that class, and it may be a good solution – except then, to be consistent, I will be putting conversion procedures everywhere in the code at the data source, or right after reading from the database; the problem is that my code will then have to deal with two systems throughout the codebase. 2) According to the rules, my idea was to put it in the view script, i.e. the last chance to modify the data before it is shown to the user. That may be the right thing to do, but it strikes me that it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up or do extra parsing, as in my case above.) 3) Another solution is to do this somewhere in a data-prep step before the view – somewhere in the middle, after the data source but before the view. This strikes me as messy, and that could be the reason my codebase is in such a mess right now. It seems that there is no best solution. What do I do?

    Read the article

  • For Windows XP: How to uninstall Broadcom Bluetooth software that won't disappear

    - by T. Webster
    Hello, my situation is similar to this question for Windows 7, except my OS is Windows XP SP3. I recently realized I made the mistake of buying a Bluetooth adapter and installing the Broadcom/Widcomm Bluetooth stack driver software. Now that I know the software is no good, I want to uninstall it (so I can install the Toshiba stack for the Cirago adapter). I've attempted to uninstall all Bluetooth driver software in Device Manager, and I don't see any remnants of any Bluetooth drivers there. I would include pictures here, but I don't have 10 reputation yet, so I'll just use links instead. picasaweb.google.com/lh/photo/2NAos6BP9UigPtJKaYRGyQ?feat=directlink But I do see that the little Bluetooth icon still persists: picasaweb.google.com/lh/photo/kbxgPP8ZLxhjX4ob2YjSew?feat=directlink I don't see anything about Bluetooth, Broadcom, or Widcomm in Add/Remove Programs, and I don't see any folders named Broadcom or Widcomm in the Program Files folder, either. But I do see that Broadcom shows up in the registry with respect to Bluetooth, as shown here: picasaweb.google.com/lh/photo/hWDo-OzB8ApibzeHA_saxQ?feat=directlink What should I do now to completely wipe this persistent Broadcom Bluetooth software off my computer?

    Read the article

  • Dash doesn't recognize that application is installed

    - by Calixte
    I've installed Geary from the Software Center, but I can't launch it from the Dash, nor have I managed to create an app icon on the Launcher. The Software Center has no problem seeing that the app is installed (see screenshot). Running the command "geary" will launch the app. Is there any way I can force the Dash to function properly? Thanks, Calixte. PS: Does anyone know what this problem is related to? I tried re-installing the app and restarting the computer, but that has no effect at all. I have installed practically nothing on this computer, so I can't imagine what app could interfere with the Dash.
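
    One hedged thing to check (standard Unity behaviour, not from the original post): the Dash builds its application list from .desktop launcher files, so it is worth verifying that the package shipped one and, if so, giving the Dash a per-user copy to index.

        # did the package ship a launcher entry? (package name assumed)
        dpkg -L geary | grep '\.desktop$'
        # if a system-wide entry exists, a per-user copy sometimes nudges
        # the Dash into indexing it (log out and back in afterwards;
        # the file path here is an assumption):
        cp /usr/share/applications/geary.desktop ~/.local/share/applications/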

    Read the article

  • Good Blog Software

    - by Darren Young
    Hi, inspired by an earlier question about starting a blog, I have decided to start one myself. I only have 4 months' commercial experience in C#, but I am hoping to use my blog as a tool for further learning – perhaps researching and writing about a different design pattern each week, a tricky aspect of C# that I don't yet fully understand, and so on. My question is: can somebody recommend a good blogging platform suited to writing text and code? Are there any that allow the use of code tags or similar for formatting? Thanks,

    Read the article
