Search Results

Search found 58486 results on 2340 pages for 'data integrator'.

Page 846/2340 | < Previous Page | 842 843 844 845 846 847 848 849 850 851 852 853  | Next Page >

  • Is it possible to preview arbitrary formats in Nautilus?

    - by alfC
    I recently found out that Nautilus (on Ubuntu 12.04 at least) can show thumbnails for files in non-image formats; for example, Grace data-grapher files (.agr) show a small version of the graph contained in their data. Obviously, there is some library or script that processes the file, renders the image, and allows Nautilus to show a small version of it. This made me think that, in principle, any file that can be processed into an image could serve as the source of a Nautilus thumbnail. For example, a .tex file (which can be converted to .pdf) or a gnuplot script could be displayed as a thumbnail where possible. In the case of a .tex file, the corresponding .pdf can be created by the command pdflatex file.tex. The question is: how can I tell Nautilus to create a thumbnail for an arbitrary format, and how do I specify the commands to do so within Nautilus?
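
    For what it's worth, a sketch of the mechanism (my addition, not part of the question): on GNOME 3.x, including I believe the Nautilus shipped with 12.04, thumbnailers are registered by dropping an entry file into /usr/share/thumbnailers/. A hypothetical tex.thumbnailer could look like this, where %i is the input file, %o the output PNG, and %s the requested pixel size:

        [Thumbnailer Entry]
        TryExec=pdflatex
        Exec=/usr/local/bin/tex-thumbnailer %i %o %s
        MimeType=text/x-tex;

    Here tex-thumbnailer is a small script of your own that runs pdflatex and then rasterizes page one of the resulting PDF at the requested size (for example with ImageMagick's convert). After adding the entry, clear the thumbnail cache (~/.thumbnails on older releases) so Nautilus regenerates thumbnails.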

    Read the article

  • Management Software in Java for Networked Bus Systems

    - by Geertjan
    Telemotive AG develops complex networked bus systems such as Ethernet, MOST, CAN, FlexRay, LIN and Bluetooth, as well as in-house products in infotainment, entertainment, and telematics relating to driver assistance, connectivity, diagnosis, and e-mobility. Devices such as those developed by Telemotive typically come with management software, so that the device can be configured, just as an internet router comes with management software. The blue AdmiraL is a development and analysis device for the APIX (Automotive Pixel Link) technology, and ships with its own management tool, shown in the first screenshot in the original article. The blue PiraT is an optimised multi-data logger, developed by Telemotive specifically for the automotive industry; with the blue PiraT, the communication of bus systems and control units is monitored and relevant data can be recorded very precisely, again via a dedicated management tool, shown in the second screenshot. Both applications are created in Java and, as clearly indicated in many ways in the screenshots, are based on the NetBeans Platform. More details can be found on the Telemotive site.

    Read the article

  • Thoughts on schemas and schema proliferation

    - by jamiet
    In SQL Server 2005 Microsoft introduced user-schema separation, and since then I have seen the use of schemas increase; whereas before I would typically see databases where all objects were in the [dbo] schema, I now see databases that have multiple schemas. A database I saw recently had 31 (thirty-one) of them. I can't help but wonder whether this is a good thing or not; clearly 31 is an extreme case, but I question whether multiple schemas create more problems than they solve. I have been involved in many discussions that go something like this:
    Developer #1> "I have a new function to add to the database and I'm not sure which schema to put it in"
    Developer #2> "What does it do?"
    Developer #1> "It provides data to a report in Reporting Services"
    Developer #2> "Ok, so put it in the [reports] schema"
    Developer #1> "Well I could, but the data will only be used by our financial reporting folks, so shouldn't I put it in the [financial] schema?"
    Developer #2> "Maybe, yes"
    Developer #1> "Mind you, the data is supposed to be used for regulatory reporting to the FSA. Should I put it in [regulatory]?"
    Developer #2> "Err..."
    You get the idea! The more schemas exist in your database, the more chance that their supposed usages will overlap. I'm left wondering whether the use of schemas is actually necessary. I don't really see them as an aid to security, because I generally believe that principals should be assigned permissions on objects as needed, on a case-by-case basis (and I have a stock SQL query that deciphers them all for me), so why bother using them at all? I can envisage a use where a database houses objects pertaining to many different business functions (which is, in itself, an ambiguous term), and in that circumstance perhaps a schema per business function would be appropriate; hence of late I have been loosely following this edict: if some objects in a database could be moved en masse to another database without the need to remove any foreign key constraints, then those objects could legitimately exist in a dedicated schema. I am interested to know what other people's thoughts are on this. If you would like to share then please do so in the comments. @Jamiet
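
    One detail worth illustrating (my addition, with invented object names): part of why the schema choice feels so low-stakes is that moving an object between schemas is a one-line T-SQL operation, though note that permissions granted on the object are dropped by the transfer and must be re-granted:

        -- Hypothetical names: move a report function out of [reports] into [financial].
        ALTER SCHEMA financial TRANSFER OBJECT::reports.fn_GetRegulatoryData;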

    Read the article

  • How to migrate from Banshee to Rhythmbox?

    - by rafalcieslak
    As has been decided, Ubuntu Precise 12.04 will feature Rhythmbox as the default music player. I am aware that this does not mean I will not be able to use Banshee; nevertheless, I would like to switch. I had been a Rhythmbox fan for a long time, but after the switch to Banshee in Natty I decided to give it a try and migrated to it completely. However, I am not very happy with it: it lags a lot for me and has some other issues. I would like to export all Banshee data to Rhythmbox. That includes:
    - Music library
    - Playlists
    - Preferably playcounts and ratings
    - Radio stations
    - Cover pictures
    What should I do to move all this data to Rhythmbox, set it as the default music player, and switch over to it smoothly?
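
    A possible starting point for scripting this, as a sketch only: Banshee keeps its library in an SQLite database, while Rhythmbox persists its library to ~/.local/share/rhythmbox/rhythmdb.xml. The path and the table/column names below are Banshee implementation details that may differ between versions, so verify them against your install:

        # Sketch: dump library data out of Banshee's SQLite store so it can be
        # merged into Rhythmbox's rhythmdb.xml (matching entries by location URI).
        import os
        import sqlite3

        db_path = os.path.expanduser("~/.config/banshee-1/banshee.db")
        conn = sqlite3.connect(db_path)
        for uri, rating, playcount in conn.execute(
                "SELECT Uri, Rating, PlayCount FROM CoreTracks"):
            print(uri, rating, playcount)
        conn.close()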

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop allows the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process, which links the SQL Job back into BizTalk processing.

    The Process Trigger XML looks like the following. It is basically the configuration hub of the business process:

        <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
          <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
          <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
          <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
          <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
          <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
          <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
          <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process. The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
          <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
          <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
          <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
          <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
          <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
          <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
        </ns0:MsgSchedulerTriggerSQLJobReceive>

    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision is made on the "IsPermissionRequired" field.

    If permission is required (IsPermissionRequired == "YES"), BizTalk uses the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:

        SQLJobEmailMessage.EmailBody = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
            MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
            "<br><br>" +
            "By moving the file, you are either giving permission to the process, or disapprove of the process." +
            "<br>" +
            "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere + "\"<br>" +
            "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
            "<br><br>" +
            "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
            "<br><br>" +
            "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
            "<br><br>" +
            "The file will be IMMEDIATELY picked up by the automated process. This is normal. You should receive a message soon that the file is processed." +
            "<br>" +
            "Thank you!"
        );
        SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
        SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
        SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
        SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
        SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
        SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
        SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
        SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;

    After the permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field:

        <?xml version="1.0" encoding="utf-8"?>
        <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
          <SQLJobName>Data Push</SQLJobName>
          <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
        </ns0:SQLJobAuthorizationTrigger>

    The end user (the human intervention piece) either grants permission for this process or denies it by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape waits for either response.

    The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, the orchestration simply sends out a notification. If permission is granted, the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer checking the status within the calling stored procedure (for more information, see my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, the stored procedure starts the job and BizTalk sends out an email. And of course, there is some error notification, shown in a screenshot in the original post.

    Footnote: the next version of this orchestration will have an additional parallel branch near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be re-tried. There are many possibilities here.

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to the company firewall server, but supposedly port 1720 is open on my one Ubuntu server. So I want to test it with netcat:

        sudo nc -ul 1720

    The port is listening on the machine ITSELF:

        sudo netstat -tulpn | grep nc
        udp   0   0 0.0.0.0:1720   0.0.0.0:*   29477/nc

    The port is open and in use on the machine ITSELF:

        lsof -i -n -P | grep 1720
        gateway   980   myuser   8u   IPv4 187284576   0t0   UDP *:1720

    Checked the firewall on the current server:

        sudo ufw allow 1720/udp
        Skipping adding existing rule
        Skipping adding existing rule (v6)

        sudo ufw status verbose | grep 1720
        1720/udp   ALLOW IN   Anywhere
        1720/udp   ALLOW IN   Anywhere (v6)

    But I try echoing data to it from another computer (I replaced the x's with the real integers):

        echo "Some data to send" | nc xx.xxx.xx.xxx 1720

    But it didn't write anything. So then I try with telnet from the other computer as well:

        telnet xx.xxx.xx.xxx 1720
        Trying xx.xxx.xx.xxx...
        telnet: connect to address xx.xxx.xx.xxx: Operation timed out
        telnet: Unable to connect to remote host

    Although I don't think telnet works with UDP sockets. I ran nmap from another computer within the same local network and this is what I got:

        sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx

        Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT
        NSE: Loaded 36 scripts for scanning.
        Initiating Ping Scan at 15:41
        Scanning xx.xxx.xx.xx [4 ports]
        Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 15:41
        Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed
        Initiating UDP Scan at 15:41
        Scanning xtremek.com (xx.xxx.xx.xx) [1 port]
        Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports)
        Initiating Service scan at 15:41
        Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx)
        Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx)
        Initiating Traceroute at 15:41
        Completed Traceroute at 15:41, 0.01s elapsed
        NSE: Script scanning xx.xxx.xx.xx.
        NSE: Script Scanning completed.
        Nmap scan report for xtremek.com (xx.xxx.xx.xx)
        Host is up (0.00013s latency).
        PORT     STATE  SERVICE VERSION
        1720/udp closed unknown
        Too many fingerprints match this host to give specific OS details
        Network Distance: 1 hop

        TRACEROUTE (using port 1720/udp)
        HOP RTT     ADDRESS
        1   0.13 ms xtremek.com (xx.xxx.xx.xx)

        Read data files from: /usr/share/nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
        Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B)

    The only thing I can think of is a firewall or VPN issue. Is there anything else I can check before requesting that they look at the firewall server again?
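
    One extra check worth trying before escalating (my addition, not from the post): because UDP gives the sender no feedback, a listener that replies to whatever it receives distinguishes "packets never arrive" from "packets arrive but nothing answers". Run the listener on the server and the sender from outside:

        # udp_probe.py -- minimal two-ended UDP reachability check (assumed helper,
        # not part of the original post). Usage:
        #   on the server:  python udp_probe.py listen 1720
        #   on the client:  python udp_probe.py <server-ip> 1720
        import socket
        import sys

        def listen(port):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("0.0.0.0", port))
            print("listening on udp/%d" % port)
            while True:
                data, addr = s.recvfrom(2048)
                print("received %r from %s" % (data, addr))
                s.sendto(b"ack", addr)  # reply so the client can confirm the round trip

        def send(host, port):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(3)
            s.sendto(b"probe", (host, port))
            try:
                print("reply received: %r" % s.recv(2048))  # path is open both ways
            except socket.timeout:
                print("no reply: dropped by a firewall, or the listener is not running")

        if __name__ == "__main__":
            if sys.argv[1] == "listen":
                listen(int(sys.argv[2]))
            else:
                send(sys.argv[1], int(sys.argv[2]))

    If the listener never logs the probe while tcpdump on the server shows nothing arriving on udp/1720, the drop is upstream of the box.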

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and from the Win7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins; on Windows, user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet can only access the homes directory; the other share throws an error. I have two drives in Ubuntu and this is the smb.conf file (I am new to Samba):

        [global]
           workgroup = WORKGROUP
           server string = %h server (Samba, Ubuntu)
           wins support = no
           dns proxy = yes
           name resolve order = lmhosts host wins bcast
           log file = /var/log/samba/log.%m
           max log size = 1000
           syslog = 0
           panic action = /usr/share/samba/panic-action %d
           security = user
           encrypt passwords = true
           passdb backend = tdbsam
           obey pam restrictions = yes
           unix password sync = yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           pam password change = yes
           map to guest = bad user
        ;  usershare max shares = 100
           usershare allow guests = yes

    And here is the share section. Both user A & B can access this from Windows, no problems:

        [homes]
           comment = Home Directories
           browseable = no
           writable = yes

    Both user A & B can see this share, but only user A can access it; user B gets an error thrown:

        [stuff]
           comment = Unixmen File Server
           path = /media/data/appinstall/
           browseable = yes
        ;  writable = no
           read only = yes
           hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

        appInstall properties:
           share name: stuff
           Allow others to create and delete files in this folder is checked
           Guest access (for people without a user account) is checked
           permissions:
              Owner: user A
                 Folder Access: Create and delete files
                 File Access: ---
              Group: user A
                 Folder Access: Create and delete files
                 File Access: ---
              Others:
                 Folder Access: Create and delete files
                 File Access: ---

    I am at a loss and need to get this working. Any ideas? The goal is to have a setup like this: 3 users on Windows machines; each user will have their own personal folder on the data drive that only they can access, plus another folder where 2 of the users have read-only access and one user has full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.
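
    Two checks that can narrow this kind of thing down (my suggestion, not from the post): validate the configuration as Samba actually parses it, and test the share from the Linux side as user B, which takes Windows out of the picture and yields specific NT_STATUS_* error codes:

        # show the effective smb.conf, including how the bare "hosts allow =" line
        # under [stuff] is being interpreted
        testparm -s

        # try the share directly as user B
        smbclient //localhost/stuff -U userB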

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new and upgraded modules became available:
    NEW CONTENT RELEASES
    • E-Business Suite 12.1: Field Service; Manufacturing Operations Center; Process Manufacturing: System Administration; Strategic Network Optimization; U.S. Federal Financials
    • Oracle Communications 11.1: Oracle Communications UPK for Pricing Design Center, Voice and Data Offerings
    • Oracle Mobile Workforce 2.1.0: Administrative Setup; User Tasks
    • Primavera: Primavera Portfolio Management 9.0
    UPK CONTENT UPGRADES
    • JDE E1 9.1: HCM Fundamentals for EnterpriseOne; Manufacturing - Product Data Management; Manufacturing Management; Discrete Shop Floor Management; Procurement and Subcontract Management
    • JDE World A9.3: Accounts Payable; Address Book; Common Foundation; General Ledger
    For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com. For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content.
    - Karen Rihs, UPK Outbound Product Management

    Read the article

  • At which architecture level are you running BDD tests (e.g. Cucumber)

    - by Pete
    In the last year I have gotten quite fond of using SpecFlow (which is a .NET port of Cucumber). I have used it to test an ASP.NET MVC application both at the web layer, i.e. using browser automation, and at the controller layer. The first gives me higher confidence in the correctness of the application, because the JavaScript is tested and improper controller configuration is also caught. But those tests are slower to execute, and more complex to implement, than those testing just the controller layer. My tests are full functional tests, i.e. they exercise all layers of the application, all the way down to the database. So the first thing before any scenario is that the database is cleared of data, allowing the test to assume that only data specified in the "Given" block exists. Then I see examples of how to use these tools where the tests exercise just the model layer. So what are your experiences with these tools? Which layer of the application do you test?
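
    For readers who have not used these tools: the same Gherkin scenario text can sit on top of step definitions that drive a browser, call a controller action, or exercise the model directly; only the bindings change. A small invented example:

        # The wording is layer-agnostic; the step definitions decide whether
        # "opens the order history page" means driving a browser or calling
        # the controller action directly.
        Scenario: Registered customer sees their order history
          Given a customer "alice" with 2 past orders
          When "alice" opens the order history page
          Then 2 orders are listed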

    Read the article

  • Algorithmic Forecasting and Pattern Recognition

    - by Ryan King
    Say a user could enter project data into my software. Each project has two variables, "size" and "work"; they're related, but the relationship is not known. Is there a way to programmatically determine the relationship between the variables based on previous data, and forecast the amount of work if given only the size of a future project? For example, say the user had manually entered the following projects:
    Project 1 - Size: 1, Work: 4
    Project 2 - Size: 2, Work: 7
    Project 3 - Size: 3, Work: 10
    Project 4 - Size: 4, Work: x
    What should I look into to be able to programmatically determine that Work = Size*3+1, and therefore be able to say that x = 13?
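
    What the example describes is simple linear regression: fit work = a*size + b to the historical points by least squares, then evaluate the line at the new size. A minimal sketch with no external libraries (function name invented):

        # Fit work = a*size + b by ordinary least squares, then predict.
        def fit_line(points):
            n = float(len(points))
            sx = sum(x for x, _ in points)
            sy = sum(y for _, y in points)
            sxx = sum(x * x for x, _ in points)
            sxy = sum(x * y for x, y in points)
            a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
            b = (sy - a * sx) / n                          # intercept
            return a, b

        history = [(1, 4), (2, 7), (3, 10)]
        a, b = fit_line(history)
        print(a * 4 + b)  # -> 13.0, i.e. Work = Size*3 + 1 evaluated at Size 4

    With more real projects the relationship is unlikely to be exactly linear, so checking the fit (residuals, or numpy.polyfit with higher degrees) tells you whether a straight line is actually a good model.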

    Read the article

  • Take part in the Oracle certification workshops in Paris on 30 October & 9 November 2012

    - by mseika
    Take part in the Oracle certification workshops in Paris on 30 October and 9 November 2012. Win the preference of your clients and prospects thanks to your Oracle specializations! To help you continue toward Oracle certification, we are offering two half-days dedicated to certification workshops in Paris. Reserve your morning of 30 October or 9 November to sit the certifications your company needs in order to become specialized. The workshops will take place at Paris Saint-Lazare, from 9:00 to 12:30, at: Centre M2i, 20 rue d'Athènes, 75009 Paris. Don't miss this opportunity; a wide choice of workshops is on offer. Please note that the number of places is limited. Workshop programme: Oracle Software: Oracle Database 11g, Database Security, Data Integration, Data Warehousing, Oracle Business Intelligence Foundation, Exadata Database Machine, Exalogic Elastic Cloud, SOA... Oracle Hardware: Oracle Linux, Oracle Solaris, SPARC Entry & Midrange, SPARC T-Series Servers, Unified Storage, Virtualization. The workshops will be followed by lunch. Prerequisites apply for taking these online exams; check them before registering.

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I asked a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw, so it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework. It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something like manual reference counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain() and decreased on release(). Thus the Resource abstract class was created. Any subclass of this will obtain the retain() and release() implementations for free. When its count hits 0 (nobody is using this class), it will call the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the garbage collector to get rid of unused pixel data. Game objects are all subclasses of the Node class, which is the main construction block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game: Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not make Resource an interface? Because I'd have to re-implement retain() and release(). I am avoiding this by virtue of not writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?

    Read the article

  • Existing Instance, Shiny New Disks

    - by merrillaldrich
    Migrating an Instance of SQL Server to New Disks. I get to do something pretty entertaining this week: migrate SQL instances on a 2008 cluster from one disk array to another! Zut alors! I am so excited I can hardly contain myself, so let's get started. (Only a DBA could love this stuff, am I right? I know.) Anyway, here's one method of many to migrate your data. Assumption: this is a host-based migration, which just means I'm using the Windows file system to push the data from one set of SAN disks...(read more)

    Read the article

  • Google Analytics - TOS section pertaining to privacy

    - by Eike Pierstorff
    The Google Analytics terms of service do not allow tracking of "data that personally identifies an individual (such as a name, email address or billing information), or other data which can be reasonably linked to such information by Google". Does anybody have first-hand knowledge of whether this includes user IDs which cannot be resolved by Google, but which can be linked to actual persons via the Analytics user's CRM system (e.g. a CRM linked to Analytics via API access)? I used to think so, but if that were the case, many e-commerce implementations would be illegal (since they store transaction IDs which can be linked to a client's purchases). If anybody has insight into the intended meaning of the paragraph (preferably with a reliable source), it would be great if he/she could share :-)

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: let us assume there is an abstract class called Person which has certain properties to define a generic person, such as name, age, date of birth, etc. There are two classes, Student and Teacher, that implement this abstract class, each adding its own unique properties. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of this question). Now I need to create views/reports/data grids which contain details from multiple classes; for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, each using data from 3-4 tables, and each taking multiple input parameters to generate. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the Student class or the Teacher class, or any other class currently present.
    a) Should I create a new class, say a dataView class, design it along MVC lines, and keep the query there? What about the other views? How do they fit in this architecture?
    b) Should I not keep the query in code at all, and make it a DB view?
    c) Am I completely wrong in my approach? If so, what is the right approach?
    My considerations are as follows: it should be easy to add new views later on, if a requirement comes, without having to copy-paste-modify code; and I would like to make it as loosely coupled as possible, so that minor DB structure changes do not break it. I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than on fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords).
    Edit1: Modified the title to make it more relevant.
    Edit2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear up the confusion.
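
    One common resolution, sketched here with invented names: give each report its own small query/read-model class that owns the SQL and returns plain rows, so neither Student nor Teacher accumulates reporting concerns, and each new view becomes a new class rather than a modification. The original is PHP, but the shape maps one-to-one; shown in Python for brevity:

        # A read-model class: owns exactly one report's query.
        import sqlite3

        class ChemistryProjectsReport:
            def __init__(self, conn):
                self.conn = conn

            def rows(self, teacher_name):
                return self.conn.execute(
                    """SELECT s.name, p.title
                         FROM students s
                         JOIN projects p ON p.student_id = s.id
                         JOIN teachers t ON t.id = p.mentor_id
                        WHERE p.subject = 'Chemistry' AND t.name = ?""",
                    (teacher_name,),
                ).fetchall()

        # usage: ChemistryProjectsReport(sqlite3.connect("school.db")).rows("Dr. Smith")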

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer’s needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results. Let’s look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things -- such as planning, searching for information, data entry -- and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are “always accurate.” How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more. Let’s face it: if all CRM solutions worked well, sales executives wouldn’t be having the same headaches as they had in the past. Join Oracle’s Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain:
    • How today’s sales processes have rendered many CRM systems obsolete
    • The secrets to smarter selling, leveraging mobile, social, and big data
    • How Oracle Sales Cloud enables smarter selling, as proven by Oracle and its customers
    Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • December events for Oracle VM

    - by Chris Kawalek
    Where in the world is Oracle VM in December? Whether you are in the US, Asia or the UK, you can find us in December at any of the events below:
    UK Oracle User Group Conference 2012, Birmingham, United Kingdom, December 1st-5th, 2012. Check out the Oracle Virtualization Strategy and Roadmap session on December 5.
    Gartner Data Center Conference 2012, Las Vegas, Nevada, USA, December 3rd-6th, 2012. Visit the Oracle booth to learn about Oracle VM and optimized data center solutions.
    NetApp Insight, Sheraton Macau, Macau, China, December 11th-13th, 2012. Oracle VM & NetApp Storage Connect: integrated solutions to simplify virtualized infrastructure management.

    Read the article

  • installing ubuntu 12.04 along windows xp and windows 7

    - by Anand A J
    I have Windows XP installed on the C drive and Windows 7 installed on the F drive. I want to install Ubuntu 12.04 alongside Windows (keeping both XP and 7) in drive G, without losing any data stored on the computer. I have a 500 GB hard disk with drives C (14.8 GB left), D, E, F, and G (15.7 GB left). I tried to install Ubuntu 12.04 from DVD and got stuck at the point of selecting partitions. How do I select the device for the boot loader installation? Will installing Ubuntu onto the G drive affect the data stored on the hard disk, or on the G drive especially? After installing Ubuntu, can I still use Windows XP and Windows 7? This is my first attempt to use Ubuntu. Can anybody help me, please?

    Read the article

  • The Enterprise Architect (EA) diary - day 22 (from business processes to implemented applications)

    - by nattYGUR
    After spending time keeping our repository up to date (adding the new ETRM application and related data flows, as well as changing databases to DB clusters), collecting more data for the root-cause analysis, and writing a proposal for creating a new software infrastructure team (one that will help us clear the table of a pile of problems that keeps growing due to BAU control over IT dev team resources), I spent time adapting our EA tool to support a diagram flow from high-level business processes down to the implementation of the new applications that will better support those processes. http://www.theeagroup.net/ea/Default.aspx?tabid=1&newsType=ArticleView&articleId=195

    Read the article

  • Now Available:Oracle Utilities Customer Self Service Version 2.1

    - by Roxana Babiciu
    The Oracle Utilities Global Business Unit is pleased to announce the general availability of Oracle Utilities Customer Self Service 2.1. It is ready for customers and partners to download and install via the Oracle Software Delivery Cloud. Key features and benefits: Oracle Utilities Customer Self Service 2.1 includes several new capabilities and enhancements, including significantly improved Commercial Account Management and Advanced Notification Management using a new Oracle Utilities Notification Center module (licensed separately). These include the following:
    • Advanced Notification Management
    • Online Issues and Forms Management
    • Budget Management and Billing for Billed Budgets
    • Prepaid User Dashboard
    • Enhanced Usage Details Web Presentment
    • Start/Stop/Transfer Service Automation
    • Payment Arrangement Automation
    • Account Sets Management for Large Commercial Customers
    • Multiple Account Usage Data Aggregation, Comparison, and Data Download
    • Multiple Account Financial History
    • Mobile Outage Maps
    More information can be found on OPN.

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only thing missing is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So I decided to look for a solution.
    Possible solution: use VirtualBox with a Solaris 11 VM as a pass-through to my data. Here are the required steps:
    Installing a Solaris 11 VM. Install VirtualBox on your Mac OS X and add the extension pack (needed for USB). Plug your ZFS-based USB drive into your Mac, and ignore it when asked to initialize it. Create a VM for Solaris (bridged network) and, before installing it, create a USB filter (in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter for the attached device: the grey usb-connector logo with the green plus sign). Install the Solaris 11 VM, boot it, and install the Guest Additions. Check the IP address of your Solaris VM with "ifconfig -a".
    Creating a path to your ZFS USB drive. In Mac OS X, use Disk Utility to unmount the USB-attached drive, then unplug the USB device. Switch back to VirtualBox and select the top of the window where your Solaris 11 is running. Plug in your ZFS USB drive, selecting "ignore" if Mac OS invites you to initialize the disk. In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.
    Connecting your Solaris VM to the USB drive. Inside Solaris, you can now check that your device is accessible by using the "format" CLI command. If not, repeat the previous steps. Now, with root privilege, force a "zpool import -f myusbdevicepoolname", because this pool was created on another system. Check that you see your new pool with "zpool status". Share your pool over NFS: "share -F nfs /myusbdevicepoolname".
    Accessing the USB ZFS drive from Mac OS X. This is the easiest step: mounting an NFS share from Mac OS. Create a "ZFSdrive" folder on your Mac OS desktop, then from a terminal under Mac OS run: mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive
    Et voilà! You can now access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can play with the share rights to alter read/write rights as needed, and you can activate compression and encryption inside the Solaris 11 VM...
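
    Condensed into commands (the pool name and VM address are placeholders from the walkthrough above):

        # inside the Solaris 11 VM, as root
        zpool import -f myusbdevicepoolname    # -f because the pool was last used on another system
        zpool status                           # confirm the pool imported cleanly
        share -F nfs /myusbdevicepoolname      # publish the pool over NFS

        # on the Mac OS X side
        mkdir -p ~/Desktop/ZFSdrive
        mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname ~/Desktop/ZFSdrive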

    Read the article

  • What is (are) the most useful technique/visualization for overall project status?

    - by Wayne Werner
    For reasons "above my pay grade", we're developing an issue/project tracking system where I work (similar to Trac, FogBugz, etc.). The managers want a useful tool for tracking the overall health of a project (e.g. how much time is left, how we are performing against estimates), and one of the features that has been requested is some type of critical-path support and visualization. The logic explained to me is that they want to be sure that at least the most important pieces of the project are currently being worked on. The initial idea was that we would create task-based dependencies. My understanding of project management tells me that this kind of granular approach is unnecessary: having milestones with specific deadlines/dependencies is much more useful. I would like to know which techniques and "pretty pictures" you have seen/used for tracking project development and found most useful. Objective data would be best, but somewhat subjective data is helpful too.

    Read the article
