Search Results

Search found 19002 results on 761 pages for 'oracle b2b 11g practice'.

Page 288/761 | < Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >

  • Multiset of shared_ptrs as a dynamic priority queue: Concept and practice

    - by Sarah
    I was using a vector-based priority queue

        typedef std::priority_queue< Event, vector< Event >, std::greater< Event > > EventPQ;

    to manage my Event objects. Now my simulation has to be able to find and delete certain Event objects not at the top of the queue. I'd like to know if my planned work-around can do what I need it to, and if I have the syntax right. I'd also like to know if dramatically better solutions exist. My plan is to make EventPQ a multiset of smart pointers to Event objects:

        typedef std::multiset< boost::shared_ptr< Event > > EventPQ;

    I'm borrowing functions of the Event class from a related post on a multimap priority queue.

        // Event.h
        #include <cstdlib>
        using namespace std;
        #include <set>
        #include <boost/shared_ptr.hpp>

        class Event;
        typedef std::multiset< boost::shared_ptr< Event > > EventPQ;

        class Event {
        public:
            Event( double t, int eid, int hid );
            ~Event();
            void add( EventPQ& q );
            void remove();
            bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); }
            bool operator > ( const Event & rhs ) const { return ( time > rhs.time ); }
            double time;
            int eventID;
            int hostID;
            EventPQ* mq;
            EventPQ::iterator mIt;
        };

        // Event.cpp
        Event::Event( double t, int eid, int hid ) {
            time = t;
            eventID = eid;
            hostID = hid;
        }
        Event::~Event() {}
        void Event::add( EventPQ& q ) {
            mq = &q;
            mIt = q.insert( boost::shared_ptr<Event>(this) );
        }
        void Event::remove() {
            mq->erase( mIt );
            mq = 0;
            mIt = EventPQ::iterator();
        }

    I was hoping that by making EventPQ a container of pointers, I could avoid wasting time copying Events into the container and avoid accidentally editing the wrong copy. Would it be dramatically easier to store the Events themselves in EventPQ instead? Does it make more sense to remove the time keys from Event objects and use them instead as keys in a multimap? Assuming the current implementation seems okay, my questions are:

    1 - Do I need to specify how to sort on the pointers rather than the objects, or does the multiset automatically know to sort on the objects pointed to?
    2 - If I have a shared_ptr ptr1 to an Event that also has a pointer in the EventPQ container, how do I find and delete the corresponding pointer in EventPQ? Is it enough to .find( ptr1 ), or do I instead have to find by the key (time)?
    3 - Is Event::remove() sufficient for removing the pointer in the EventPQ container?
    4 - There's a small chance multiple events could be created with the same time (obviously implied in the use of multiset). If find() works on event times, to avoid accidentally deleting the wrong event, I was planning to throw in a further check on eventID and hostID. Does this seem reasonable?
    5 - (Dumb syntax question) In Event.h, is the forward declaration class Event;, then the EventPQ typedef, and then the real class Event definition appropriate?

    I'm obviously an inexperienced programmer with a very spotty background--this isn't for homework. I would love suggestions and explanations. Please let me know if any part of this is confusing. Thanks.
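
    Update: to make question 1 concrete, here's my current understanding as a minimal sketch (the comparator name is mine): a multiset of shared_ptrs orders by the pointer values themselves unless a dereferencing comparator is supplied, e.g.

        // orders shared_ptrs by the Events they point to, via Event::operator<
        struct DerefLess {
            bool operator()( const boost::shared_ptr<Event>& a,
                             const boost::shared_ptr<Event>& b ) const {
                return *a < *b;  // compares Event::time
            }
        };
        typedef std::multiset< boost::shared_ptr<Event>, DerefLess > EventPQ;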

    Read the article

  • What is the best practice to segment C#/.NET projects based on a single base project

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory: DLLs, resources, etc.) which is a CMS. I need to use this project as a base for other custom projects. This base project is to be maintained and updated across all custom projects. I use Subversion (CollabNet and TortoiseSVN). I have two questions:

    1 - Can I use Subversion to share the base project among other projects? What I mean here is: can I "check out" the base project into another checked-out project and have both update and commit separately? To paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others). Can I then commit those changes, and upon doing so, when I update the base project in the other checked-out copies, will it pull the changes? In short, I would like not to have to manually deploy updated core files into each separate project whenever I make changes.

    2 - If I create a custom file (let's say a web control or ASPX page, etc.) can I have it compile separately from the base project? Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of projects attached to it. So I may have a number of DLLs including the "website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project: I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs. (There is no need for the core project to access the module code, as it should be one way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that; maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/WordPress; since PHP generally doesn't have to be compiled first, I see this as the reason why all this is possible/easy there. The idea is to allow pluggable themes, modules, etc. (I haven't tried simply adding .NET themes after compile/publish, but I am assuming this is possible anyway? Or does the compiler need to reference items in the files?)
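
    Update (question 1): one Subversion feature that seems to match this workflow is svn:externals, sketched here with made-up URLs and folder names:

        # run at the root of a custom project's working copy;
        # "core" and the repository URL are hypothetical
        svn propset svn:externals "core http://svnserver/repos/base-cms/trunk" .
        svn commit -m "pull the shared CMS core in via svn:externals"
        svn update   # fetches the core external; later updates refresh it

    Commits made inside the external's directory go to the base project's repository, so a core fix made while working on one custom project can be committed once and picked up by every other project on its next update.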

    Read the article

  • What's the best way/practice to get the extension of an uploaded file in PHP

    - by Roland
    I have a form that allows users to upload files. I need to get the file extension, and while I am able to get it, I'm not sure I'm using the most effective solution. I can get it in the following ways:

        $fileInfo = pathinfo($_FILES['File']['name']);
        echo $fileInfo['extension'];

        $ext = end(explode('.', $_FILES['File']['name']));
        echo $ext;

    Which method is the best to use, or are there even better solutions that would get the extension?
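
    Update: a third option I've come across, for the record: pathinfo() takes a flag that returns just the extension. (Also, end(explode(...)) passes a temporary by reference, which raises a strict-standards notice on newer PHP versions.)

        // PATHINFO_EXTENSION returns only the extension component
        $ext = pathinfo($_FILES['File']['name'], PATHINFO_EXTENSION);
        echo $ext;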

    Read the article

  • SQL Query - 20mil records - Best practice to return information

    - by eqiz
    I have a SQL database that has the following table:

        Table: PhoneRecords
        ID (identity seed)
        FirstName
        LastName
        PhoneNumber
        ZipCode

    A very simple, straightforward table. This table has over 20 million records. I am looking for the best way to do queries that pull records out of the table based on area codes. For instance, here is an example query that I have done:

        SELECT phonenumber, firstname
        FROM [PhoneRecords]
        WHERE (phone LIKE '2012042%')
           OR (phone LIKE '2012046%')
           OR (phone LIKE '2012047%')
           OR (phone LIKE '2012083%')
           OR (phone LIKE '2012088%')
           OR (phone LIKE '2012841%')

    As you can see, this is an ugly query, but it would get the job done (if I weren't running into timeout issues). Can anyone tell me the best way, for speed/optimization, to do the above query and display the results? Currently the query above takes around 2 hours to complete on 9 GB of 1600 MHz RAM and an i7 930 quad-core OC'd to 4.01 GHz. I obviously have the computer power required to do such a query, but it still takes too long.
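
    Update: one direction I'm looking at, sketched here (the index name is mine, and I'm assuming SQL Server from the [PhoneRecords] syntax; note the query says phone where the table declares PhoneNumber): each LIKE above has a constant prefix, so it is sargable, and a covering nonclustered index should let the optimizer do a handful of index seeks instead of scanning 20 million rows:

        -- hypothetical covering index; INCLUDE adds firstname to the leaf level
        -- so the query never has to touch the base table
        CREATE NONCLUSTERED INDEX IX_PhoneRecords_PhoneNumber
            ON PhoneRecords (PhoneNumber)
            INCLUDE (FirstName);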

    Read the article

  • Opinions regarding C++ programming practice

    - by Sagar
    I have a program that I am writing, not too big. Apart from the main function, it has about 15 other functions that are called for various tasks at various times. The code works just fine all in one file, as it is right now. However, I was wondering if anyone had any advice on whether it is smarter/more efficient/better programming to put those functions in a separate file from where main is, or whether it even matters at all. If yes, why? If no, why not? I am not new at C++, but definitely not an expert either, so if you think this question is stupid, feel free to tell me so. Thanks for your time!

    Read the article

  • dynamic searchable fields, best practice?

    - by boblu
    I have a Lexicon model, and I want users to be able to add dynamic features to every lexicon. I also have a complicated search interface that lets users search on every single feature (including the dynamic ones) belonging to the Lexicon model. I could have used a serialized text field to save all the dynamic information if it weren't for the searching. Since I want to let users search on all fields, I have created a DynamicField model to hold all dynamically created features. But imagine I have 1,000,000,000 lexicons: if someone creates a dynamic feature for every lexicon, this will create 1,000,000,000 rows in the DynamicField model, so the SQL search function will become quite inefficient once a lot of dynamic features have been created. Is there a better solution for this situation? Which way should I take:

    1 - searching for a better db design for dynamic fields, or
    2 - trying to tune mysql (add cache fields, add indexes...) with the current db design?
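
    Update: for clarity, this is roughly the shape I have in mind for the DynamicField table (table and column names invented; the composite index is what would keep per-feature searches from scanning):

        -- hypothetical EAV-style layout for the dynamic features
        CREATE TABLE dynamic_fields (
            lexicon_id BIGINT       NOT NULL,
            name       VARCHAR(64)  NOT NULL,   -- feature name
            value      VARCHAR(255),            -- feature value, stored as text
            PRIMARY KEY (lexicon_id, name),
            KEY idx_name_value (name, value(100))  -- prefix keeps the key short
        ) ENGINE=InnoDB;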

    Read the article

  • M2M Solutions: The Move to Value Creation and the Internet of Things

    - by Javier Puerta
    There's a new Oracle-sponsored report available on big data, specifically machine-to-machine (M2M) data (there will probably be more growth in M2M data than in human-generated content like social media). Forbes published an article, Big Data Set to Explode as 40 Billion New Devices Connect to Internet, which references the report. Login to Download the M2M Solutions Report. Good reading!

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently redid an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it.

    I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that; permissions should also prevent the leaking in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.

    Do you have a better suggestion? Which method would you recommend? More importantly, why?
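
    Update: to illustrate the encrypt-on-the-web-tier half of the idea, a minimal sketch using .NET's stock RSA API (the class and key names are mine, not from our codebase): the web server holds only the public key, so a compromise would let an attacker encrypt new records but never decrypt stored ones.

        // Web tier: encrypt with the public key only; the private key stays offline.
        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class PiiProtector
        {
            // publicKeyXml holds only the public parameters (no private exponent)
            public static byte[] Encrypt(string plaintext, string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(publicKeyXml);
                    // OAEP padding; plain RSA is workable here because the
                    // payloads are small individual fields
                    return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true);
                }
            }
        }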

    Read the article

  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it’s B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that’s called a relationship. And those aren’t broken easily.

    Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don’t just rest with the recipient. They’re published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising.

    What’s more, your customers have come to expect access to you and satisfaction from you using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you’re responding, you’ve got to have a tech platform that’s set up to moderate and alert so you’ll know ASAP a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they’re looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible.

    But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing.

    But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real-world or virtual customer service, and that message is highly likely to get amplified through social channels faster and louder. Only 36% of the NM Incite study’s respondents reported that their problems were solved quickly and effectively. 36%? That’s hardly an impressive number. It gets worse. 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girlfriends or boyfriends really easy for a competitor to steal.

    Given the technology tools and data available right now for having an intimate knowledge of the customer (what products they’ve purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction), there are fewer excuses than ever for making the lifeblood of your business feel like you couldn’t care less. @mikestiles

    Read the article

  • Special announcement on E-Business Suite 11.5.10 Sustaining Support and 12.1 Extended Support

    - by user552636
    True, this year's Oracle OpenWorld (OOW) was a while ago, but this blog did not exist back then. Since then, however, several people have asked me how to interpret the announcement about E-Business Suite support made at OOW, so I thought it might be useful for Hungarian users if I wrote a few lines about it.

    The announcement concerning E-Business Suite (EBS) 11.5.10: under Oracle's Lifetime Support model, this release became generally available in November 2004; Oracle provided Premier Support for it until November 30, 2010, and provides Extended Support from December 1, 2010 through the end of November 2013. From December 1 of next year, EBS 11.5.10 moves into the Sustaining Support phase. In Sustaining Support, newly discovered bugs are no longer fixed by Development. According to the announcement, Oracle is making an exception for 11.5.10: in the period from December 1, 2013 to November 30, 2014 it will also provide fixes for newly discovered Severity 1 issues affecting production systems. One thing to watch out for: the system must be at the minimum patch level detailed in My Oracle Support document Doc ID 883202.1. This extra service does not affect the support fee.

    The announcement concerning E-Business Suite (EBS) 12.1: the originally announced Extended Support period for EBS 12.1 was May 1, 2014 - April 30, 2017. Oracle has added 19 months to this period, so Extended Support for this release will run until December 31, 2018. More good news for customers using this release: Oracle is waiving the Extended Support uplift fee. Under standard Oracle pricing, the service fee in the first year of Extended Support is 110% of the Premier Support fee, and in the second and third years 120% of the Premier Support fee. In this case Oracle is waiving the extra 10% and the extra 20%, so it will provide Extended Support at the Premier Support price. Again, note that the system must be at the minimum patch level detailed in My Oracle Support document Doc ID 1195034.1.

    To make the periods easier to follow, the original post included a chart illustrating the individual support phases of the releases in question.

    Read the article

  • New Interaction Hub Statement of Direction Published

    - by Matthew Haavisto
    The latest PeopleSoft Interaction Hub Statement of Direction is now available on My Oracle Support. We think this subject will be particularly interesting to customers given the impending release of the PeopleSoft Fluid User Experience and all that it offers. The Statement of Direction describes how we see the Interaction Hub being used with the new user experience and the Hub's continued place in a PeopleSoft environment. This paper also discusses subjects like branding, content management, easier design/deployment, and the optional restricted use license.

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for OpenWorld, here is a nice e-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews, and a lot more good info. Enjoy the read so you come prepared for OpenWorld!! Read the E-Book here. For those coming to Oracle OpenWorld (or the America's Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Latest Fusion DOO White Paper - Overcoming Order Management Complexity in Global Organizations

    - by Pam Petropoulos
    Check out this latest Fusion Distributed Order Orchestration white paper, entitled "Overcoming Order Management Complexity in Global Organizations". Discover how Oracle Fusion DOO enables large, complex organizations to streamline their order management processes and take advantage of lower costs, higher margins, and improved customer service. Click here to read the white paper.

    Read the article

  • Is it certified and supported to install the Exalytics Management Pack on an Exalytics server with OVS?

    - by Saresh
    Q: Is it certified and supported to install the Exalytics Management Pack on an Exalytics server with OVS?

    A: The BI Management Pack can certainly be used to manage Exalytics and BI targets. However, it is not supported to install an EM agent on dom0; the monitoring agents have to be installed on the guests. Please refer to http://docs.oracle.com/cd/E24628_01/install.121/e24215/exalytics_mgmt.htm#BABGDIIE

    Read the article

  • Learn Prolog Now! DCG Practice Example

    - by Timothy
    I have been progressing through Learn Prolog Now! as self-study and am now learning about Definite Clause Grammars. I am having some difficulty with one of the Practical Session's tasks. The task reads:

    The formal language a^n b^2m c^2m d^n consists of all strings of the following form: an unbroken block of as followed by an unbroken block of bs followed by an unbroken block of cs followed by an unbroken block of ds, such that the a and d blocks are exactly the same length, and the b and c blocks are also exactly the same length and furthermore consist of an even number of bs and cs respectively. For example, ε, abbccd, and aaabbbbccccddd all belong to a^n b^2m c^2m d^n. Write a DCG that generates this language.

    I am able to write rules that generate a^n d^n, b^2m c^2m, and even a^n b^2m and c^2m d^n... but I can't seem to join all these rules into a^n b^2m c^2m d^n. The following are my rules that can generate a^n d^n and b^2m c^2m:

        s1 --> [].
        s1 --> a, s1, d.
        a --> [a].
        d --> [d].

        s2 --> [].
        s2 --> c, c, s2, d, d.
        c --> [c].
        d --> [d].

    Is a^n b^2m c^2m d^n really a CFG, and is it possible to write a DCG using only what was taught in the lesson (no additional arguments or code, etc.)? If so, can anyone offer me some guidance on how I can join these so that I can solve the given task?
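
    Update: after more staring, here is a sketch of one way I think the two patterns can be nested (posting it for comment, corrections welcome): since the language is a^n (b^2m c^2m) d^n, the even b/c grammar can sit as the innermost part of the a...d recursion, still with no extra arguments:

        s --> inner.
        s --> a, s, d.
        inner --> [].
        inner --> b, b, inner, c, c.
        a --> [a].
        b --> [b].
        c --> [c].
        d --> [d].

        % e.g. ?- phrase(s, [a,b,b,c,c,d]). succeeds,
        %       as does ?- phrase(s, []).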

    Read the article

  • Best Practice: Accessing radio button group

    - by seth
    Looking for the best, standards-compliant, cross-browser-compatible, forwards-compatible way to access a group of radio buttons in the DOM (this is closely related to my other most recent post...), without the use of any external libraries.

        <input type="radio" value="1" name="myRadios" />One<br />
        <input type="radio" value="2" name="myRadios" />Two<br />

    I've read conflicting information on getElementsByName(), which seems like the proper way to do this, but I'm unclear if it is a standards-compliant, forwards-compatible solution. Perhaps there is a better way?
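
    Update: the two standards-based options as I understand them, sketched against the markup above (plain DOM, no library; corrections welcome):

        // getElementsByName: returns the (live) collection of the radio group
        var radios = document.getElementsByName('myRadios');
        for (var i = 0; i < radios.length; i++) {
            if (radios[i].checked) {
                alert(radios[i].value);  // value of the selected radio
            }
        }

        // querySelectorAll: selector-based equivalent (IE8 and later)
        var radios2 = document.querySelectorAll('input[name="myRadios"]');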

    Read the article

  • Are protected constructors considered good practice?

    - by Álvaro G. Vicario
    I'm writing some little helper classes to handle trees. Basically, I have a node and a special root node that represents the tree. I want to keep it generic and simple. This is part of the code:

        <?php
        class Tree extends TreeNode {
            public function addById($node_id, $parent_id, $generic_content){
                if( $parent = $this->findNodeById($parent_id) ){
                    $parent->addChildById($node_id, $generic_content);
                }
            }
        }

        class TreeNode {
            public function __construct($node_id, $parent_id, $generic_content){
                // ...
            }

            protected function addChildById($node_id, $generic_content){
                $this->children[] = new TreeNode($this->node_id, $node_id, $generic_content);
            }
        }

        $Categories = new Tree;
        $Categories->addById(1, NULL, $foo);
        $Categories->addById(2, NULL, $bar);
        $Categories->addById(3, 1, $gee);
        ?>

    My questions: Is it sensible to force TreeNode instances to be created through Tree::addById()? If it is, would it be good practice to declare TreeNode::__construct() as private/protected?
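
    Update: for comparison, a minimal sketch of the protected-constructor-plus-factory pattern in isolation (class and method names invented, not from my code above):

        <?php
        class Node {
            protected $id;

            // protected: callers cannot write "new Node(...)" directly
            protected function __construct($id){
                $this->id = $id;
            }

            // the single sanctioned way to create an instance
            public static function create($id){
                return new static($id);  // late static binding, PHP 5.3+
            }
        }

        $n = Node::create(42);  // works
        // $n = new Node(42);   // fatal error: constructor is protected
        ?>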

    Read the article

  • [EF + Oracle] Object Context

    - by JTorrecilla
    Prologue: After EF episodes I and II, we are going to look at the Object Context.

    What is the Object Context? It is a class which manages the DB connection and the different Entities of our model. When Visual Studio creates the EF model, as I explained previously, it also generates a class that extends ObjectContext. ObjectContext provides:

    - the DB connection
    - add, update and delete functions
    - Object Sets of Entities
    - the state of pending changes

    This class exposes a function for each Entity, of the form AddTo{ENTITY}({Entity_Type} value), which adds an Entity to the related ObjectSet. In addition, it has a property for each Entity, of the form ObjectSet<TEntity> Entity, which holds the related record set; it is filled by the CreateObjectSet<TEntity> function of the base class (ObjectContext).

    What is an ObjectSet? It is a class that allows us to manage the Entity Set of a given Type. It inherits from:

    - ObjectQuery<TEntity>
    - IObjectSet<TEntity>
    - IQueryable<TEntity>
    - IEnumerable<TEntity>
    - IQueryable
    - IEnumerable

    An ObjectSet is a class property through which we can query, insert, delete and update records of the corresponding Entity. In the following chapters we will see how to query Entities.

    LazyLoadingEnabled: a very important property of the context is LazyLoadingEnabled. This Boolean property indicates whether data loading is lazy; in other words, whether an object is not created and queried until it is needed.

    Finally: in this post we have seen what the VS-generated context is, some of its characteristics, and where Entity data is kept. In the next chapters we will see CRUD operations and how to query ObjectSets.
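
    A minimal usage sketch (the entity name Customer and context name MyEntities are invented for illustration; AddTo{ENTITY} and ContextOptions are the generated/EF4 members described above):

        // MyEntities is the VS-generated ObjectContext subclass
        using (var context = new MyEntities())
        {
            // opt out of lazy loading explicitly (EF4 ObjectContext API)
            context.ContextOptions.LazyLoadingEnabled = false;

            var customer = new Customer { Name = "Ada" };
            context.AddToCustomers(customer);  // generated AddTo{ENTITY} method
            context.SaveChanges();             // persists the pending changes
        }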

    Read the article

  • Why does Oracle SQL Developer take so long to open?

    - by oscilatingcretin
    I think anyone who's used Oracle SQL Developer will agree that it's painfully slow on load. My research has led me to a solution that seems to have helped a little, and that's telling OSQLD not to check for updates on startup. However, it still takes several minutes to open. What could OSQLD possibly be doing during load time? Is there any way to get it to open right away?

    Edit: Adding potentially relevant system specs:

    CPU: Intel i5-2520M 2.5 GHz
    OS: Windows 7 32-bit
    RAM: 4 GB

    Read the article

  • Best practice for using a WCF service from Silverlight?

    - by bonefisher
    How would you structure the code for calling a WCF service in a Silverlight application? Using a WCF service proxy instantiated only once (a singleton) and used across the whole SL app? If so, how do you handle unsubscribing controls from the call-completed event? Or do you create a WCF service proxy for each service call? Where do you close the proxy then?

    Read the article

  • What is the best practice for accessing Model using MVVM pattern

    - by Dzenand
    I have a Model (on its own thread) that communicates with a database and web services, and exposes Data Objects. My UI application consists of different Views, ViewModels and custom controls. I'm using a ServiceProvider (IServiceProvider) to access the Model and route its events to the UI thread. Communication between the ViewModels is handled by a Messenger. Is this the way to go?

    I was also wondering about the best way to structure the Data Objects. At the moment the Data Objects have a hierarchical structure but do not support INotifyPropertyChanged, though the children lists are of type ObservableCollection. I have no possibility to implement property-change notification on the properties. I was wondering about the best way of making them MVVM-friendly: implementing a partial class and adding all the properties or commands that are necessary, or wrapping all the Data Objects and keeping the Model list and the MVVM list in sync. All thoughts and ideas are appreciated.

    Read the article

  • Sorry For The Short Notice! November Deep Dive Demo Invitations

    - by KemButller
    If you would like to get a deep-dive overview and demo of two of JD Edwards' hottest products in the privacy of your own office, you are in luck! The Oracle sales team invites you to attend their online seminars covering EnterpriseOne One View Reporting and EnterpriseOne Health and Safety Incident Management. You can get the details and register via these links. EnterpriseOne One View Reporting - November 13  EnterpriseOne Health and Safety Incident Management - November 20 

    Read the article

  • python list/dict property best practice

    - by jterrace
    I have a class object that stores some properties that are lists of other objects. Each of the items in the list has an identifier that can be accessed with the id property. I'd like to be able to read and write from these lists but also be able to access a dictionary keyed by their identifier. Let me illustrate with an example:

        class Child(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Teacher(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = children
                self.teachers = teachers

        classroom = Classroom([Child('389', 'pete')], [Teacher('829', 'bob')])

    This is a silly example, but it illustrates what I'm trying to do. I'd like to be able to interact with the classroom object like this:

        # access like a list
        print classroom.children[0]

        # append like it's a list
        classroom.children.append(Child('2344', 'joe'))

        # delete from it like it's a list
        classroom.children.pop(0)

    But I'd also like to be able to access it like it's a dictionary, and the dictionary should be automatically updated when I modify the list:

        # access like a dict
        print classroom.childrenById['389']

    I realize I could just make it a dict, but I want to avoid code like this:

        classroom.childrendict[child.id] = child

    I also might have several of these properties, so I don't want to add functions like addChild, which feels very un-pythonic anyway. Is there a way to somehow subclass dict and/or list and provide all of these functions easily with my class's properties? I'd also like to avoid as much code as possible.
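
    Update: the closest shape I've found so far, as a sketch (the class name and the by_id attribute are mine; only append and pop are kept in sync here, so any other mutators would need the same treatment):

        class IndexedList(list):
            """A list that also maintains an id -> item mapping."""
            def __init__(self, items=()):
                super(IndexedList, self).__init__(items)
                self.by_id = dict((item.id, item) for item in self)

            def append(self, item):
                super(IndexedList, self).append(item)
                self.by_id[item.id] = item

            def pop(self, index=-1):
                item = super(IndexedList, self).pop(index)
                del self.by_id[item.id]
                return item

        classroom = Classroom(IndexedList([Child('389', 'pete')]), [])
        print classroom.children[0].name            # list-style access
        print classroom.children.by_id['389'].name  # dict-style access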

    Read the article

  • Extend legacy site with another server-side programming platform best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: 2-3 pages. I contacted the company's site administrator and she showed me the administrative part - the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company and fully accessible and upgradeable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, because I'm not sure they are still around and capable enough to extend this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and reference the helper site via an iframe from the present site. The new pages will also pull some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

    Read the article

< Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >