Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.

Page 16 of 387

  • Are filesystem operations a function of the kernel?

    - by hydroparadise
    I suppose the question would be OS specific, so I'll take the following scenarios: Windows (NTFS), OS X (HFS), Linux (ext2, ext3, ext4). Each operating system has its default filesystem it operates on (OS X, I believe, only has the one choice available). I've noticed some utilities out there for OSes to read different file systems (which obviously are NOT a part of the kernel), which got me thinking: are filesystem operations a function of a driver (i.e., potentially modular), or are they truly a part of the kernel?
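
    On Linux you can see both cases directly from a shell (a sketch of standard commands, not specific to any distribution): filesystems the running kernel supports are listed in /proc/filesystems, and many of them are loadable driver modules rather than built-in code.

      cat /proc/filesystems          # filesystems the running kernel currently knows about
      lsmod | grep -E 'ext4|fuse'    # filesystem drivers loaded as modules (may be empty if built in)
      modinfo ext4                   # module details, if ext4 was built as a module on this kernel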

    Read the article

  • Oracle Accelerate : Packaged CX Solutions for Growing Companies

    - by Richard Lefebvre
    Oracle Accelerate is Oracle's approach for providing simple-to-deploy, packaged, enterprise-class software solutions to growing midsize organizations through its network of expert partners. They come with a fixed price and a fixed scope, and can be industry- or country-specific. Here is a selection of Oracle Accelerate solutions specially tailored for EMEA-based customers looking to grow their business with CX technology:
    Oracle Sales Cloud
    - Birchman Consulting's Oracle Accelerate Solution for Oracle Sales Cloud
    - CSolutor Oracle Accelerate Solution for Oracle Sales Cloud
    - CapricornVentis Oracle Accelerate Solution for Oracle Sales Cloud
    Oracle Sales Cloud for vertical industries
    - Enigen’s Oracle Accelerate solution for Oracle Fusion CRM for Professional Services
    - BPI's Oracle Accelerate solution for Oracle Sales Cloud for Business Services Companies
    - BPI's Oracle Accelerate Solution for Oracle Sales Cloud for Insurance Companies
    - BPI's Oracle Accelerate solution for Oracle Sales Cloud for Engineering & Construction Companies
    - BPI's Oracle Accelerate Solution for Oracle Sales Cloud for Telecommunications Companies
    - Fellow Consulting's Oracle Accelerate Solution for Oracle Sales Cloud for Consumer Goods industry
    - Fellow Consulting's Oracle Accelerate Solution for Oracle Sales Cloud for Wholesale Distribution
    - Fellow Consulting's Oracle Accelerate Solution for Oracle Sales Cloud for Life Science industry
    Oracle Service Cloud (RightNow)
    - CapricornVentis Oracle Accelerate Solution for Oracle RightNow Cloud Service for Retail Industry for Ireland
    - CapricornVentis Oracle Accelerate Solution for Oracle RightNow Cloud Service for Retail Industry for the United Kingdom
    - Enigen’s Oracle Accelerate Solution for Oracle RightNow Service Cloud for the United Kingdom
    - DNASTREAM’s RapidLaunch Oracle Accelerate solution for RightNow
    Oracle Commerce (ATG)
    - ProgiCommerce - an Oracle Accelerate solution for ATG Commerce delivered by PROGIWEB
    - Spindrift Momentum - an Oracle Accelerate Solution for ATG Commerce for Retail Industry
    - e2x RoadRunner - the ATG Oracle Accelerate solution for Manufacturing Industry
    - e2x RoadRunner - the ATG Oracle Accelerate solution for Telecommunications Industry
    - e2x RoadRunner - the ATG Oracle Accelerate solution for Retail Web Commerce

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in Tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days. In particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition for the term. However, in practical retail terms, I would say that it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions and manage the inventory accordingly. Search, blogging, website and store data may need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that Buyers, Merchandisers and Supply Chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think Retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • What's the convention for extending Linq with set based helper operations

    - by Luke Rohde
    Hi all, I might be vaguing out here, but I'm looking for a nice place to put set-based helper operations in LINQ so I can do things like: db.Selections.ClearTemporary(), which does something like db.Selections.DeleteAllOnSubmit(db.Selections.Where(s => s.Temporary)). Since I can't figure out how to extend Table<Selection>, the best I can do is create a static method in a partial class of Selection (similar to Ruby), but I have to pass in the data context, like: Selection.ClearTemporary(MyDataContext). This kind of sucks because I have two conventions for doing set-based operations and I have to pass the data context to the static class. I've seen other people recommend piling helper methods into a partial of the data context, like: myDataContext.ClearTemporarySelections(); but I feel this makes the DC a dumping ground for incohesive operations. Surely I'm missing something. I hope so. What's the convention? TIA
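
    A minimal sketch of one convention that keeps the call site reading db.Selections.ClearTemporary(): an extension method on Table<Selection>. It assumes Selection is the generated LINQ to SQL entity and Temporary is a bool column; those names come from the question, the rest is illustrative.

      using System.Data.Linq;
      using System.Linq;

      public static class SelectionTableExtensions
      {
          // Extension method, so no DataContext needs to be passed explicitly:
          // Table<TEntity> is bound to its owning DataContext already.
          public static void ClearTemporary(this Table<Selection> selections)
          {
              selections.DeleteAllOnSubmit(selections.Where(s => s.Temporary));
              // The caller decides when to call db.SubmitChanges(), which keeps
              // this helper composable with other pending changes.
          }
      }

    Usage would then be db.Selections.ClearTemporary(); db.SubmitChanges(); - one convention for all set-based helpers, without turning the data context into a dumping ground.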

    Read the article

  • Library for polygon operations

    - by AJM
    I've recently encountered a need for a library or set of libraries to handle operations on 2D polygons. I need to be able to perform boolean/clipping operations (difference and union) and triangulation. So far the libraries I've found are poly2tri, CGAL, and GPC. Poly2tri looks good for triangulation, but that still leaves me without boolean operations, and I'm unsure about its maturity. CGAL and GPC are only free if my own project is free. My particular project isn't commercial, so I'm hesitant to pay for or request any licenses. But I may want to use my code for a future commercial project, so I'm wary of CGAL's open source licenses and GPC's freeware-only restriction. There don't seem to be any polygon clipping libraries with nice BSD-style licenses.

    Read the article

  • EXPLAIN PLAN FOR in ORACLE

    - by Adnan
    I am writing a test. I have all the tests stored in rows, so my rows look like this:

      ID | TEST
      ----------------------------------
       1 | 'select sysdate from dual'
       2 | 'select sysdatesss from dual'

    Now I read the table row by row and I need to check each statement with EXPLAIN PLAN FOR, so for the first row it would be EXPLAIN PLAN FOR select sysdate from dual, but I have a problem turning the TEST field into a statement. Right now I use EXPLAIN PLAN FOR testing.TEST, but it does not work. Any ideas?
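
    A minimal PL/SQL sketch of one way to do this, assuming the table is called TESTING with columns ID and TEST as above: EXPLAIN PLAN only accepts a literal SQL statement, so the stored text has to be concatenated into a dynamic statement and run with EXECUTE IMMEDIATE.

      BEGIN
        FOR r IN (SELECT id, test FROM testing) LOOP
          BEGIN
            -- Tag each plan with the row id so the plans can be told apart later.
            EXECUTE IMMEDIATE 'EXPLAIN PLAN SET STATEMENT_ID = ''' || r.id ||
                              ''' FOR ' || r.test;
          EXCEPTION
            WHEN OTHERS THEN
              -- e.g. row 2 above fails with ORA-00904 (invalid identifier)
              DBMS_OUTPUT.PUT_LINE('Test ' || r.id || ' failed: ' || SQLERRM);
          END;
        END LOOP;
      END;
      /
      -- A plan can then be read back per statement id, e.g. for row 1:
      -- SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', '1'));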

    Read the article

  • T-SQL Operations on a Calculated Date Field

    - by firedrawndagger
    Can I do WHERE operations on a calculated date field? I have a lookup field which has been designed badly in SQL, and unfortunately I can't change it. Basically it stores dates as characters such as "July-2010" or "June-2009" (along with other non-date data). I want to extract the dates first (which I did using a LIKE operator) and then extract data based on a date range.

      SELECT BusinessUnit, Lookup, ReleaseDate
      FROM (
        SELECT TOP 10 LookupColumn AS Lookup, BU AS BusinessUnit,
               CONVERT(DATETIME, REPLACE(LookupColumn,'-',' ')) AS ReleaseDate
        FROM [dbo].[LookupTable]
        WHERE LookupColumn LIKE N'%-2010'
      ) MyTable
      ORDER BY ReleaseDate
      WHERE ReleaseDate = '2010-02-01'

    I'm having issues with the WHERE clause. I would assume creating a subquery to encapsulate the calculated field would allow me to do operations on it, such as WHERE, but maybe I'm wrong. Bottom line: is it possible to do operations on calculated fields?
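
    A sketch of the corrected shape, using the table and column names given above: the outer WHERE has to come before ORDER BY, and once the conversion lives in the derived table the computed ReleaseDate can be filtered like any other column. TOP 10 is left out here because it would limit rows before the date filter is applied.

      SELECT BusinessUnit, Lookup, ReleaseDate
      FROM (
          SELECT LookupColumn AS Lookup,
                 BU AS BusinessUnit,
                 CONVERT(DATETIME, REPLACE(LookupColumn, '-', ' ')) AS ReleaseDate
          FROM [dbo].[LookupTable]
          WHERE LookupColumn LIKE N'%-2010'   -- keep non-date rows out before converting
      ) AS MyTable
      WHERE ReleaseDate = '2010-02-01'        -- WHERE precedes ORDER BY
      ORDER BY ReleaseDate;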

    Read the article

  • Harmonized sales tax headaches

    - by JonYork
    Alright, I'm using the BambooInvoice software, and where I am we have two sales taxes. This is how they work:

      price of item * tax1 = Sum1Tax1
      Sum1Tax1 * tax2 = final sales price

    Currently, BambooInvoice does this:

      price of item * tax1 = pricetax1
      price of item * tax2 = pricetax2
      price of item + pricetax1 + pricetax2

    and this is its code:

      $this->db->select('(SELECT SUM('.$this->db->dbprefix('invoice_items').'.amount * '.$this->db->dbprefix('invoice_items').'.quantity * ('.$this->db->dbprefix('invoices').'.tax1_rate/100 * '.$this->db->dbprefix('invoice_items').'.taxable)) FROM '.$this->db->dbprefix('invoice_items').' WHERE '.$this->db->dbprefix('invoice_items').'.invoice_id=' . $invoice_id . ') AS total_tax1', FALSE);

      $this->db->select('(SELECT SUM('.$this->db->dbprefix('invoice_items').'.amount * '.$this->db->dbprefix('invoice_items').'.quantity * ('.$this->db->dbprefix('invoices').'.tax2_rate/100 * '.$this->db->dbprefix('invoice_items').'.taxable)) FROM '.$this->db->dbprefix('invoice_items').' WHERE '.$this->db->dbprefix('invoice_items').'.invoice_id=' . $invoice_id . ') AS total_tax2', FALSE);

      $this->db->select('(SELECT SUM('.$this->db->dbprefix('invoice_items').'.amount * '.$this->db->dbprefix('invoice_items').'.quantity + ROUND(('.$this->db->dbprefix('invoice_items').'.amount * '.$this->db->dbprefix('invoice_items').'.quantity * ('.$this->db->dbprefix('invoices').'.tax1_rate/100 + '.$this->db->dbprefix('invoices').'.tax2_rate/100) * '.$this->db->dbprefix('invoice_items').'.taxable), 2)) FROM '.$this->db->dbprefix('invoice_items').' WHERE '.$this->db->dbprefix('invoice_items').'.invoice_id=' . $invoice_id . ') AS total_with_tax', FALSE);

    How would we modify this code to reflect the actual taxation scheme for my area? Thanks
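
    A hedged sketch of the arithmetic change, assuming the scheme above means tax2 is charged on the tax1-inclusive amount (compound tax): the total becomes amount * quantity * (1 + tax1) * (1 + tax2) for taxable lines, and the tax2 subtotal is computed on the tax1-inclusive base; total_tax1 stays as it is. Shown as raw SQL with the same column names the queries above use; the invoices.id join column and the invoice id literal are assumptions, and table prefixes are omitted.

      -- Illustrative query shape only; the CodeIgniter select() strings would be edited to match.
      SELECT
        SUM(invoice_items.amount * invoice_items.quantity
            * (1 + invoices.tax1_rate/100 * invoice_items.taxable)
            * (invoices.tax2_rate/100 * invoice_items.taxable))      AS total_tax2,
        SUM(invoice_items.amount * invoice_items.quantity
            * (1 + invoices.tax1_rate/100 * invoice_items.taxable)
            * (1 + invoices.tax2_rate/100 * invoice_items.taxable))  AS total_with_tax
      FROM invoice_items
      JOIN invoices ON invoices.id = invoice_items.invoice_id
      WHERE invoice_items.invoice_id = 42;   -- hypothetical invoice id

    With taxable = 1 this gives price + tax1 + tax2-on-(price+tax1) = price * (1+tax1) * (1+tax2); with taxable = 0 both factors collapse to 1 and the line is untaxed.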

    Read the article

  • IT lead does not have a backup, DR plan in writing

    - by Alex
    This is a general management question to IT managers out there. We are a small firm with about 4 servers in our colo cabinet. No full-time IT manager. But we do have one person on a monthly contract, and I am having a terrible time getting him to share what these plans actually are. I am sure he HAS a plan (and it's probably in his head), but that does us no good if he gets hit by a bus. How would you guys handle this? He is a long-time friend, but I fear this is dangerous for us long term. I have confronted him on several occasions about this, and he tells me not to worry, he has got it covered. Thanks.

    Read the article

  • Social-Networking Startup, Hosting Plan

    - by pws5068
    I've created a social networking community which is soon ready to release, and I'm trying to decide on a type of hosting plan. I have considered options such as VPS and reseller plans. I anticipate (or at least hope for) a significant amount of traffic/bandwidth in the not-too-distant future. If I open a reseller account, will I see the same server lag during busy hours that I do with a shared account? How significant is the profit margin with the reseller option? Aside from generalized "configurability", what advantages merit purchasing a VPS? Is there anything stopping me from reselling space on a VPS account? Features I need include: PHP, MySQL, unlimited domains, Ruby on Rails, remote database connections.

    Read the article

  • Developing and implementing a testing plan for a software app deployed on a web server

    - by Abhzoo
    A company in the USA is building a new web app that will be offered as SaaS to customers, and the development is being done by a software development team located in a different country (India). They are about to take delivery of a first demo to provide live feedback to the team in India. The overseas team requires a cloud server (Windows + SQL Standard, 8GB RAM, 8 vCPUs, 40GB SSD system disk, 80GB SSD data disk, 1600Mb/s network bandwidth) to serve as a test server. When the test server is set up, the team will install the app on it to get live feedback. Q: Explain in detail how you will develop and implement a testing plan for the software app. Be sure to explain the specifics. PLEASE HELP, NEED ANSWER ASAP

    Read the article

  • What is the best plan to handle server fault for google app engine [closed]

    - by lucemia
    I used Google App Engine without preparing much of a backup plan before, but that no longer looks like a good idea... Since it is quite hard to find a backup replacement for Google App Engine, I plan to just add a "server error" page which will be shown during a server fault. Currently I am thinking to:
    1. Use the CDN CloudFlare in front of Google App Engine. It will also handle the name servers for me.
    2. Prepare a static version of the webpages (such as "Oops! the server fault") on another hosting platform.
    3. When Google App Engine fails, switch the destination from Google App Engine to the static page by changing the CNAME records on CloudFlare.
    Is there any other recommended way to handle this situation?

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
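
    A rough micro-benchmark sketch for testing the per-file-overhead hypothesis yourself (a hypothetical script, not from the post): create, stat and delete a few thousand small files, report the average time per file, then run it on NTFS, HFS+ and ext3/ext4 volumes and compare the numbers.

      import os, time, tempfile

      N = 5000  # number of small files; raise it until the numbers stabilise

      def per_file_ms(directory):
          paths = [os.path.join(directory, "f%05d.tmp" % i) for i in range(N)]
          start = time.perf_counter()
          for p in paths:                      # create
              with open(p, "wb") as fh:
                  fh.write(b"x" * 1024)        # 1 KiB payload, so metadata cost dominates
          for p in paths:                      # stat
              os.stat(p)
          for p in paths:                      # delete
              os.remove(p)
          return (time.perf_counter() - start) / N * 1000.0

      with tempfile.TemporaryDirectory() as d:  # pass dir=... to target a specific volume
          print("%.3f ms per file in %s" % (per_file_ms(d), d))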

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 on my MacBook Pro). Notice the time for ListingsCRUDController.php - it just says "2012". In order to see the date more clearly I ran ls --time-style="full-iso" -l, and it shows that this file's last modified date is ~5 hours in the future (screenshots of the directory listing and system time omitted). To make things more confusing, the system will intermittently speed up. Suddenly, my app will start serving requests in 1-2 seconds (down from 40 seconds) for no apparent reason - I don't change anything in my code or system config, it just changes. Also, during a slow PHP request, the php5-fpm process (behind nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on and I need to know why it's doing this. It has become unusable.
    Information about my setup:
    VirtualBox 4.2.0
    Host: MacBook Pro
    Guest: Ubuntu Server 12.04
    Package dkms is installed
    Timezones match for Ubuntu and PHP
    Things I've tried:
    Both Apache and Nginx.
    APC enabled and disabled.
    Xdebug enabled and disabled.
    1 processor up to 4 processors.
    1GB memory up to 4GB memory.
    I've installed Ubuntu using both the regular kernel and the VM kernel.
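
    One hypothetical first check, assuming the future-dated mtimes come from clock drift between the guest and the host (a common VirtualBox symptom, and stale or future timestamps also confuse Symfony's cache-freshness checks): resync the guest clock, keep it synced, then re-test the cache rebuild. The commands below use standard Ubuntu 12.04 packages, run inside the guest; installing the VirtualBox Guest Additions, which include a host time-sync service, is the longer-term fix.

      sudo apt-get install ntp ntpdate   # time daemon plus a one-shot sync tool
      sudo ntpdate -u pool.ntp.org       # -u: unprivileged port, works even if ntpd is already running
      date                               # confirm the guest clock and timezone now look sane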

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500GB... This amount of data will increase over time too, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again) and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems: Mozy seems overly expensive per gig at 50c; CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!); Backblaze doesn't support Windows Server; Carbonite business pricing is $600 up front for 500GB of storage, and for $229 they will not back up Windows Servers. So, other than those, JungleDisk (at 15c per gig) or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article

  • Mac Backup Plan

    - by Chuy77
    I'm reviewing my backup plan and would appreciate any thoughts about what more I should do (if anything) to make sure I'm properly covered in case of all hell breaking loose. :-) I have one machine. 1) I run a nightly clone with SuperDuper. I alternate the clone drive weekly so I have two clones, one never more than a week old. 2) I use BackBlaze as a sort of Time Machine in the cloud. It runs all the time and keeps everything on my machine backed up online. 3) I sync all my 1Password logins, etc. to my iPhone once a week. ...And that's it. I feel pretty covered. But I'm always reading stuff like this: http://www.43folders.com/2010/03/15/yes-another-backup-lecture And that doesn't even mention online backup, and seems like a huge pain in the behind. But maybe I'm being naive? Should I have more backups? Thanks for any feedback. I really appreciate it.

    Read the article

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver in use by my business. I am very new to this area and have a few ideas about how things should work, but am unsure of what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simplistic; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will be taking place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution. Is taking an image of the webserver system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time that I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! Server details below.
    Webserver platform: Linux (Ubuntu Server)
    Running: mail server, SVN server, MediaWiki, WordPress, Apache webserver
    Hardware: single 500GB SATA drive
    Architecture: single machine behind a router (with firewall), accessible to the internet
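
    For what it's worth, a minimal weekly file-level sketch under stated assumptions (the paths, the backup mount point and the database credentials setup are all hypothetical and would need adjusting to the actual layout): dump the databases, then rsync the web roots, the SVN repositories and /etc onto the backup disk, run it from cron once a week, and still test a restore now and then by actually pulling files and a database back out.

      #!/bin/sh
      # Weekly backup sketch: one dated directory per run on the locally attached backup disk.
      set -e
      DEST=/mnt/backup/$(date +%Y-%m-%d)     # assumes the backup HDD is mounted at /mnt/backup
      mkdir -p "$DEST"

      # Databases first: a consistent dump, not a copy of live data files.
      mysqldump --all-databases --single-transaction > "$DEST/mysql-all.sql"

      # Then the file trees that actually hold state.
      rsync -a /var/www/  "$DEST/www/"       # WordPress + MediaWiki document roots
      rsync -a /var/svn/  "$DEST/svn/"       # SVN repos (svnadmin hotcopy is safer if commits can happen mid-backup)
      rsync -a /var/mail/ "$DEST/mail/"      # mail spools
      rsync -a /etc/      "$DEST/etc/"       # server configuration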

    Read the article

  • Disk operations in windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it and it failed; booting into Safe Mode takes around two hours, and it mainly freezes on two files: scsiport.sys and classpnp.sys. When it has finally started, all disk operations are really slow. Once it has run for a while it goes faster, probably because data has been moved into RAM. It froze on another file before, one associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that it might be Intel Rapid Storage Technology acting up, but it doesn't allow me to reinstall it from Safe Mode, and I haven't been able to log into normal mode for a while. I would appreciate suggestions on how to get into normal mode again and/or what the root cause could be.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?
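
    A hedged example of the kind of tuning that can help (these are standard Linux NFS client mount options; whether they make the ordering reliable enough for this use is workload-dependent): the client-side write batching and attribute caching on host A is usually what delays and reorders what B eventually sees, and it can be reduced at mount time on A, at a real performance cost. Having the writer call fsync() after each update, rather than only flushing stdio buffers, matters for the same reason.

      # On host A, mount B's export with client caching reduced (server name and path are placeholders):
      mount -t nfs -o sync,noac,lookupcache=none serverB:/export/shared /mnt/shared
      #   sync             - each write is pushed to the server before the call returns,
      #                      instead of being gathered and sent later
      #   noac             - no attribute caching, so size/mtime updates are not delayed
      #   lookupcache=none - do not cache directory lookups on the client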

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if it would give me more responsiveness. System details: Dell Optiplex 960, Core 2 Quad Q9650, 8GB RAM, 2 SATA drives - 320GB (ST3320418AS) and 1TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage IO priorities on Windows?

    Read the article

  • j2me MIDP: detecting if phone has a data plan

    - by SB
    Is there a way to determine what kind of data plan a device has so an app provides a less rich experience if a data plan is not available? I imagine the connector factory would still be able to return me an HTTPConnection but it would cost the user serious money for lots of data, and I'd like to be nice and prevent that. I thought there would be a way to query device capabilities in the MIDP API, but maybe it's in CLDC?

    Read the article

  • Sort algorithm with fewest number of operations

    - by luvieere
    What is the sorting algorithm with the fewest number of operations? I need to implement it in HLSL as part of a Pixel Shader 2.0 effect for WPF, so it needs to use a really small number of operations, considering Pixel Shader 2.0's instruction limits. I need to sort 9 values, specifically the current pixel and its 8 neighbors.
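
    For a fixed, small input size like 9, a data-independent sorting network is one natural fit for a pixel shader, since each compare-exchange is just a min/max pair with no branching. Below is an illustrative sketch (not from the question) of an unrolled odd-even transposition network: 9 rounds, 36 compare-exchanges, provably sorting any 9 inputs. Smaller networks exist (25 comparators is optimal for 9 inputs) if this still doesn't fit the ps_2_0 instruction budget, and a median-only network is cheaper still if only the middle value is actually needed.

      // Compare-exchange: afterwards v[a] <= v[b]. Two intrinsics, no branches.
      #define CSWAP(a, b) { float lo = min(v[a], v[b]); v[b] = max(v[a], v[b]); v[a] = lo; }
      // One even-indexed and one odd-indexed pass of the transposition network.
      #define PASS_A CSWAP(0,1) CSWAP(2,3) CSWAP(4,5) CSWAP(6,7)
      #define PASS_B CSWAP(1,2) CSWAP(3,4) CSWAP(5,6) CSWAP(7,8)

      void Sort9(inout float v[9])
      {
          // 9 alternating passes guarantee 9 elements end up fully sorted.
          PASS_A PASS_B PASS_A PASS_B PASS_A PASS_B PASS_A PASS_B PASS_A
      }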

    Read the article

  • Web Hosting Plan

    - by Laith J
    I'm looking for a new web hosting service. Currently I'm using GoDaddy's Economy plan, but it offers only PHP for server-side scripting, and for my next project it looks like I'm going to need Java. Currently I pay ~60 USD per year for both the domain and the web hosting plan. Does anyone know of a web hosting service that supports Java and isn't much more expensive than this? Thanks, God bless

    Read the article
