Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 392/537

  • Find points whose pairwise distances approximate a given distance matrix

    - by Stephan Kolassa
    Problem. I have a symmetric distance matrix with entries between zero and one, like this one:

        D = ( 0.0 0.4 0.0 0.5 )
            ( 0.4 0.0 0.2 1.0 )
            ( 0.0 0.2 0.0 0.7 )
            ( 0.5 1.0 0.7 0.0 )

    I would like to find points in the plane that have (approximately) the pairwise distances given in D. I understand that this will usually not be possible with strictly correct distances, so I would be happy with a "good" approximation. My matrices are smallish, no more than 10x10, so performance is not an issue.

    Question. Does anyone know of an algorithm to do this?

    Background. I have sets of probability densities between which I calculate Hellinger distances, which I would like to visualize as above. Each set contains no more than 10 densities (see above), but I have a couple of hundred sets.

    What I did so far. I did consider posting at math.SE, but looking at what gets tagged as "geometry" there, it seems like this kind of computational geometry question would be more on-topic here. If the community thinks this should be migrated, please go ahead. This looks like a straightforward problem in computational geometry, and I would assume that anyone involved in clustering might be interested in such a visualization, but I haven't been able to google anything. One simple approach would be to randomly plonk down points and perturb them until the distance matrix is close to D, e.g. using simulated annealing, or to run a genetic algorithm. I have to admit that I haven't tried that yet, hoping for a smarter way. One specific operationalization of a "good" approximation in the sense above is Problem 4 in the Open Problems section here, with k=2. Now, while finding an algorithm that is guaranteed to find the minimum l1-distance between D and the resulting distance matrix may be an open question, it still seems possible that there is at least some approximation to this optimal solution. If I don't get an answer here, I'll mail the gentleman who posed that problem and ask whether he knows of any approximation algorithm (and post any answer I get here).
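    What the poster describes is essentially metric multidimensional scaling (MDS). A minimal sketch of classical MDS in Python/NumPy, using the matrix D above: double-center the squared distances and read coordinates off the top two eigenvectors. Note this minimizes a least-squares (strain) criterion, not the l1 criterion from the linked open problem, so treat it as a good starting point rather than an optimal solution.

        import numpy as np

        D = np.array([[0.0, 0.4, 0.0, 0.5],
                      [0.4, 0.0, 0.2, 1.0],
                      [0.0, 0.2, 0.0, 0.7],
                      [0.5, 1.0, 0.7, 0.0]])

        def classical_mds(D, k=2):
            """Points in R^k whose pairwise distances approximate D."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
            B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
            w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
            idx = np.argsort(w)[::-1][:k]                # keep the k largest
            scale = np.sqrt(np.clip(w[idx], 0.0, None))  # clamp negatives: D need not be Euclidean
            return V[:, idx] * scale                     # n x k coordinate matrix

        points = classical_mds(D)

    The result can then be refined with an iterative stress minimizer (e.g. SMACOF, available as sklearn.manifold.MDS) seeded with these coordinates.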

    Read the article

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language and later perhaps ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as time zone, screen resolution, and the complete list of fonts and plugins available. My first impression is: is all this information really usable/used on a majority of all websites? For example, how many sites really send different content types depending on the HTTP Accept header, or on what fonts are available (I thought CSS had taken care of this)? Suppose one day some of these headers or pieces of JS functionality were gone. Which ones would: never be noticed to be gone? impact user experience? impact server performance? be immediately reimplemented because the Internet cannot work without them? Extra credit for differentiating between what can be done, what should be done and what is done in most situations.

    Read the article

  • ArchBeat Link-o-Rama for 10-19-2012

    - by Bob Rhubart
    One Week to Go: OTN Architect Day Los Angeles - Oct 25 Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. Register now! When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Moving your APEX app to the Oracle Cloud | Dimitri Gielis Oracle ACE Director (and OSN Developer Challenge co-winner) Dimitri Gielis shares the steps in the process as he moves his "DGTournament" application, along with all of its data, onto the Oracle Cloud. A brief note for customers running SOA Suite on AIX platforms | A-Team - SOA "When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks," says Christian, an architect on the Oracle Fusion Middleware A-Team. "On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters." Introducing the New Face of Fusion Applications | Misha Vaughan Oracle ACE Directors Debra Lilley and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look. ADF Essentials Security Implementation for Glassfish Deployment | Andrejus Baranovskis According to Oracle ACE Director Andrejus Baranovskis, Oracle ADF Essentials includes all the key ADF technologies, save one: ADF Security. In this post he illustrates a solution for filling that gap. Thought for the Day "Why are video games so much better designed than office software? Because people who design video games love to play video games. People who design office software look forward to doing something else on the weekend." — Ted Nelson Source: softwarequotes.com

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open/Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly not at all). If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated. If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them. So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?
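    A minimal sketch of the I/A/B workflow described above, in Python; the names Storage, FileStorage and CachedFileStorage are hypothetical stand-ins for I, A and B:

        from abc import ABC, abstractmethod

        class Storage(ABC):                      # the interface "I"
            @abstractmethod
            def save(self, key: str, data: bytes) -> None: ...

        class FileStorage(Storage):              # implementation "A": stable, frozen
            def save(self, key: str, data: bytes) -> None:
                with open(key, "wb") as f:
                    f.write(data)

        class CachedFileStorage(Storage):        # implementation "B": built alongside A
            def __init__(self) -> None:
                self._cache = {}                 # hypothetical new requirement: a write-through cache
            def save(self, key: str, data: bytes) -> None:
                self._cache[key] = data
                with open(key, "wb") as f:
                    f.write(data)

        def make_storage() -> Storage:
            # The only line that changes when B matures:
            return FileStorage()                 # later: return CachedFileStorage()

    Client code depends only on Storage, so swapping A for B never modifies a tested class; only the factory changes, which is the closed-for-modification part of the principle.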

    Read the article

  • What does "fully supported" mean in context of Radeon Opensource Video Driver?

    - by stevecoh1
    UPDATE: This is not a request for support of my specific issue. Details of that issue are here: How to recover from bad upgrade to 13.04 (Unity very slow). I have "solved" that issue, for the time being anyway, by loading alternative lighter-weight desktops. This question was opened specifically to question the meaning of the documentation at https://help.ubuntu.com/community/RadeonDriver . END OF UPDATE There it is, in black and white: https://help.ubuntu.com/community/RadeonDriver Fully Supported All these Radeon(HD) cards and derivatives have good 3D acceleration support. This is not an exhaustive list: ... RV610/RV630 Radeon HD 2400/2600/2700/4200/4225/4250 Yet in my case (the HD2400) this proves to be manifestly untrue, at least if "fully supported" means sufficient to run Unity in Ubuntu 13.04. It runs all the applications I can launch under Unity, but Unity itself is unbearably slow. It's quite striking really. Click on the "Dash" - go get a cup of coffee. Type a key in the Unity search box, wait five seconds for it to appear. Type Alt-Tab and wait five seconds for the screen to finish painting. None of these issues appears outside of Unity components. As you all know, there are complaints all over the Internet about slow Unity performance. Shouldn't this page somehow address this issue, especially if "fully supported" doesn't mean sufficient to run the default modern Ubuntu release? What does "fully supported" mean?

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Yes, the DVD is finally available. This is an exhaustive 14 hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details including ordering information about the DVD here. And if you’re interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy these. Detailed Table of Contents:
      Introduction (13:49)
      30,000 Foot Overview (42:07)
      Application Management (43:35)
      User Experience (16:00)
      Writing Code Part 1 (1:07:49)
      Writing Code Part 2 (34:41)
      Simple Web Parts (14:01)
      Visual Web Parts (6:35)
      Pages (35:02)
      Putting it All Together (29:13)
      Client Side Technology (49:19)
      ADO.NET Data Services (51:29)
      Custom Data Services (43:30)
      Managing Data (29:02)
      Managing Data: Content Types (17:11)
      Managing Data: Events (19:22)
      Managing Data: List Scalability (35:51)
      Managing Data: Querying (20:07)
      Enterprise Content Management: DocumentIDs and Document Sets (16:44)
      Enterprise Content Management: Metadata Infrastructure (22:13)
      Enterprise Content Management: Record Management (26:27)
      Enterprise Content Management: Content Organizer (7:21)
      Enterprise Content Management: Enterprise Content Types (11:21)
      Business Connectivity Services (BCS) in the SharePoint Designer (26:09)
      BCS in Visual Studio (9:57)
      Workflows in the SharePoint Designer (22:07)
      Workflows in Visual Studio (19:01)
      Business Intelligence (21:14)
      Excel (15:25)
      Performance Point (24:37)
      Security: Claims-Based Authentication (27:13)
      Security: Secure Store Service (11:04)
      Security: The SharePoint Object Model (11:16)

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship, and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I could think of doing this is to have all the conditions I care about be high-level nodes that effectively change the ship's state. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree was run, even if they are very rare occurrences. I'd prefer not doing this, for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems that they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
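    One common middle ground is to let event handlers write cheap flags to a blackboard and guard the retreat subtree with a high-priority condition node, so the per-frame cost is a single boolean check rather than re-evaluating the event itself. A minimal Python sketch of that idea (all names hypothetical, not from any particular engine):

        SUCCESS, FAILURE = "success", "failure"

        class Blackboard:
            def __init__(self):
                self.ally_destroyed = False      # flipped by the event, read by the tree

        def on_ship_was_destroyed(hostile_blackboards):
            for bb in hostile_blackboards:       # the event just pushes state...
                bb.ally_destroyed = True         # ...no tree surgery required

        class Condition:
            def __init__(self, predicate): self.predicate = predicate
            def tick(self, bb): return SUCCESS if self.predicate(bb) else FAILURE

        class Action:
            def __init__(self, fn): self.fn = fn
            def tick(self, bb): return self.fn(bb)

        class Sequence:                          # fails fast on the first failing child
            def __init__(self, *children): self.children = children
            def tick(self, bb):
                for c in self.children:
                    if c.tick(bb) != SUCCESS: return FAILURE
                return SUCCESS

        class Selector:                          # priority order: first success wins
            def __init__(self, *children): self.children = children
            def tick(self, bb):
                for c in self.children:
                    if c.tick(bb) == SUCCESS: return SUCCESS
                return FAILURE

        ship_tree = Selector(
            Sequence(Condition(lambda bb: bb.ally_destroyed),
                     Action(lambda bb: SUCCESS)),   # retreat behavior goes here
            Action(lambda bb: SUCCESS),             # default dogfight behavior
        )

    Each frame the tree ticks once; the flag check is O(1), so rare events cost nothing until they actually fire.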

    Read the article

  • Updating an ADF Web Service Data Control When Service Structure or Location Change

    - by Shay Shmeltzer
    The web service data control in Oracle ADF gives you a simplified approach to consuming services in ADF applications, and now with ADF Mobile the usage of this service seems to be growing. A frequent question we get is: what happens if the service that I'm consuming changes? How do I update my data control? Well, first we should mention that if you do a good design of your application before you actually code, then things like a Web service method signature shouldn't change. The signature is the contract between the publisher and the consumer, and contracts shouldn't be broken. But in reality things do change during development stages, so here is how you can update both method signatures and service location with the Web service data control (demonstrated in a video in the original post). After watching this video you might be tempted not to copy the WSDLs to your project, which lets you use the right-click update on a data control. However, there is a reason why the copy is on by default: it reduces network traffic when you are actually running your application, since ADF doesn't need to go to the server to find out the service structure. So for runtime performance, you probably should keep the WSDL local. I encourage you to further look into both the connections.xml file, where your service location is saved, and the datacontrols.dcx file, where its definition is kept, to get an even deeper understanding of how ADF works underneath the declarative layers.

    Read the article

  • Free Developer Day - Hands-on Oracle 11g Applications Development

    - by [email protected]
    Spend a day with us learning the key tools, frameworks, techniques, and best practices for building database-backed applications. Gain hands-on experience developing database-backed applications with innovative and performance-enhancing methods. Meet, learn from, and network with Oracle database application development experts and your peers. Get a chance to win a Flip video camera and Oracle prizes, and enjoy post-event benefits such as advanced lab content downloads. Bring your own laptop (Windows, Linux, or Mac with minimum 2GB RAM) and take away scripts, labs, and applications*. Space is limited. "Register Now" for this FREE event. Don't miss your exclusive opportunity to meet with Oracle application development & database experts, win Oracle trainings, and discuss today's most vital application development topics. Win two Oracle trainings valued at $2,500 each, offered by SDT Learning Corp: Oracle Application Express: Developing Web Applications (4-day course), and Oracle Fusion Middleware 11g: Java Programming Ed 1.1 (5-day course). You can also register by calling Jamielle Gandía at 787-999-3187. Requirements by track: For the .NET track: 1) A Windows machine with 2GB memory. 2) Attendees must, in advance of the show, download and install VMware Player: http://www.vmware.com/products/player/ 3) Attendees should test their machine to make sure they can run an executable on an external USB hard drive (some corporate machines are locked down so they cannot do this). For the Java track (you will save time if you install these applications in advance): 1) A Windows machine with 2GB memory. 2) VirtualBox must be installed on each laptop (see the VirtualBox site for details and downloads). For the APEX track: 1) A Windows machine with 2GB memory. The Oracle corporate agenda is available here. Note: limited to 50 people per track.

    Read the article

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement for out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (can also filter if desired) and moved about 1.3GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from the design to the execution without any funky configuration. Plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!
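    The KM itself ships with ODI, but the underlying pattern (buffered streaming plus a worker per source file matched by a wildcard) is easy to picture. A rough Python sketch of that pattern, purely as an illustration, not the KM's actual Java implementation:

        import concurrent.futures
        import glob
        import os
        import shutil

        def move_file(src, dest_dir):
            """Stream one file to the target directory (a transform hook could go here)."""
            dest = os.path.join(dest_dir, os.path.basename(src))
            with open(src, "rb") as fin, open(dest, "wb") as fout:
                shutil.copyfileobj(fin, fout, length=1024 * 1024)  # 1 MB buffered chunks
            return dest

        def move_files(pattern, dest_dir, threads=2):
            sources = glob.glob(pattern)        # wildcard resource name, e.g. "*.csv"
            with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
                return list(pool.map(lambda s: move_file(s, dest_dir), sources))

        # move_files("/data/in/*.csv", "/data/out", threads=2)

    As in the ODI run described above, the win comes from avoiding a staging hop entirely and letting the thread count match the available processors.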

    Read the article

  • The latest Oracle Social Network News from Open World

    - by me
    Highlights: Oracle and partners showcase the latest development around Oracle Social Network (OSN); integration of OSN Social Fabric into business applications like Finance, HCM and Customer Experience; partners like Cisco WebEx, Avaya, Weemo, Lingotek and HarQen showcase OSN integration; Oracle shares details around its internal OSN deployment. Please visit us at 2413 Moscone South Exhibition Hall and experience a live OSN demo. Social Fabric: Oracle Social Network socializes your applications, processes and content within your enterprise. Here are some examples of what is shown at Oracle Open World. Socialize the Finance department: Enable Finance departments to collaborate instantly during quarter close with real-time information access. Enable finance professionals in the back office to easily interact with the rest of the company. Provide privacy when discussing sensitive financial results within Conversations. Socialize Human Capital Management (HCM): Promote attainable performance goals that achieve the business objectives of the enterprise. Capture expertise across the network. Provide a continuous feedback loop that results in productivity and innovation improvement tied to higher employee engagement. OSN and Customer Experience: Find the person with the best skills to assist with the issue. Collaborate in real time in the context of the issue. Track an agent's collaboration contributions. Identify and contribute relevant knowledge back to the system. Cisco/WebEx integration: The web conferencing tool of your choice can be integrated with OSN. In the example below you can see the integration of the Cisco WebEx solution into OSN, and sure, this works on mobile devices as well. OSN @ Oracle: Oracle has deployed OSN as part of the internal Fusion CRM application rollout. After just 4 months we can see impressive usage patterns.

    Read the article

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010. The product team worked hard to preview a constantly evolving solution at the BUILD conference last year, and after completing 31 sprints, the preview service has today been opened for all. No more invitation codes required: TfsPreview has been made public! “Since we announced the Team Foundation Service Preview at the BUILD conference last year, we’ve limited the on boarding of new customers by requiring invitation codes to create accounts.  The main reason for this has been to control the growth of the service to make sure it didn’t run away from us and end up with a bad user experience.  In this time period, we’ve continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build. ”   - Brian Harry Since the service is still in preview, it is free for all. If you haven’t already, now is the best time to try out the offering. There is no fixed timeline for when the service becomes chargeable, but the terms of service support production use, the service is reliable, and the product team is committed to carrying all of your data forward into production. “The service will remain in “preview” for a while longer while we work through additional features like data portability, commercial terms, etc but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production. ”  - Brian Harry As of today it’s possible to use TFS Preview with VS 2012 RC, VS 2010 SP1, and VS 2008 SP1; the service currently does not work with VS 2005, which is something the product team is actively working on. You can refer to Brian’s announcement blog post here: http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating a multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I'm not sure it's the best I can do in terms of extensibility and performance. The current architecture has the following components: DataTier (contains EF POCO objects); DomainModel (contains domain-related objects); Global (among other common things, contains Repository objects for CRUD to the DB); Business Layer (business logic and interaction between data and client, plus CRUD using the repository); Web (Client) (which talks to the DomainModel and Business layers but also has its own ViewModels for Create and Edit views, for example). Note: I am using ValueInjector for converting one type of entity to another (which is proving to be an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it (1) creates a ViewModel, (2) converts the ViewModel to a DomainModel for the business layer to understand, and (3) the business layer converts it to a DataModel for the repository, and then data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,

    Read the article

  • ArchBeat Link-o-Rama for November 29, 2012

    - by Bob Rhubart
    Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal | Murali Veligeti A detailed technical post with supporting downloads from Murali Veligeti. Business SOA: When to shout, the art of constructive destruction Communication skills are essential for architects. Sometimes that means raising your voice. Steve Jones shares some tips for effective communication when the time comes to let it all out. Centralized Transaction Management for ADF Data Control | Andrejus Baranovskis Oracle ACE Director and prolific blogger Andrejus Baranovskis shares instructions and a sample application to illustrate how to implement centralized Commit/Rollback management in an ADF application. Collaborative Police across multiple stakeholders and jurisdictions | Joop Koster Capgemini Oracle Solution Architect Joop Koster raises some interesting IT issues regarding the challenges facing international law enforcement. Architected Systems: "If you don't develop an architecture, you will get one anyway…" "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge." Thought for the Day "Good judgment comes from experience, and experience comes from bad judgment. " — Frederick P. Brooks Source: Quotes for Software Engineers

    Read the article

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vgaswitcheroo enables me to switch my discrete card off to save power. Now when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but still, it freezes as soon as I echo DIS > switch. Typing blindly, echo IGD > switch returns me to my console, where it reads [ 179.555171] i915: switched off, but it seems the discrete card never gets switched on... Running echo DDIS > switch gives me the following: [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3 [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94 [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF) [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume After that, the atombios part repeats a few times. Also, the terminal locks up again and SysRq+REISUB is my only rescue. Does anybody have an idea how I can switch to my discrete card without the system locking up? #uname -srvmpio Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux #lsb_release -r Description: Ubuntu 12.04 LTS

    Read the article

  • Experiences with Ubuntu One?

    - by rsuarez
    I'm currently testing several backup/sync services: Dropbox, SpiderOak and Ubuntu One. Of them, Dropbox is the winner hands down; SpiderOak is nice too, but a bit more intrusive and unpredictable (sometimes it is slow in syncing some files, or doesn't sync them at all). Ubuntu One has promise, but I've used it much less than the other two. I'm thinking about buying a "20 pack" and using Ubuntu One as my only synchronization software. It's the cheapest of them all ($3/month vs. $10 for Dropbox and SpiderOak), and 20GB of space is enough for me. My intention is to sync most of my $HOME folder. All the computers I'll connect will have Ubuntu installed, so not being multiplatform doesn't really matter to me. If its performance is as good as Dropbox's, I'm sold. But I'd like to gauge some experiences here first. Is anyone using it seriously? I.e., to sync a lot of files that change often (like the aforementioned $HOME folder, program sources, or something alike). What have been your experiences? Thanks in advance.

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes I'm basing my work on this tutorial, and therefore am also using "IBOs" I create my buffers before rendering begins My buffers include vertex and texture data I'm using OpenGL ES 1.1 Fearing some strange effect of the current matrix GL state at the time of buffer creation I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block which (as expected) had no effect I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.
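    A classic cause of erratic geometry in exactly this situation: once a VBO is bound, the last argument of each gl*Pointer call is interpreted as a byte offset into the bound buffer, not a CPU address, and those pointer calls must be re-specified after every glBindBuffer. If the code recreates its VBOs each frame it also re-runs that setup, which would explain why recreating them "fixes" things. A sketch of the bind-then-point order, written in Python/PyOpenGL for brevity (the calls mirror the OpenGL ES 1.1 C API; the interleaved 16-byte vertex layout here is an assumption, not Texture2D's actual layout):

        from OpenGL.GL import *
        import ctypes

        STRIDE = 16  # assumed layout per vertex: x, y (floats) then s, t (floats)

        def draw_textured_quads(vbo, ibo, index_count):
            glBindBuffer(GL_ARRAY_BUFFER, vbo)
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)
            glEnableClientState(GL_VERTEX_ARRAY)
            glEnableClientState(GL_TEXTURE_COORD_ARRAY)
            # With a VBO bound, these "pointers" are byte offsets into the buffer;
            # they go stale whenever a different buffer was bound in between.
            glVertexPointer(2, GL_FLOAT, STRIDE, ctypes.c_void_p(0))
            glTexCoordPointer(2, GL_FLOAT, STRIDE, ctypes.c_void_p(8))
            glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, ctypes.c_void_p(0))
            glBindBuffer(GL_ARRAY_BUFFER, 0)
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)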

    Read the article

  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder where you store the TFS cache. It can take up “some” amount of room, so moving it to another drive can be beneficial. This is the source control cache that TFS uses to cache data from the database. Moving the cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a performance improvement (although small) by putting it on another drive. 1. Create a new directory to store the cache, e.g. “d:\TfsCache\”. 2. Give the local TFS WPG group (the App Tier service WPG) full control of the directory. 3. In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting to the appSettings section (this will trigger a restart of the app pool):

        <appSettings>
          ...
          <add value="D:\" key="dataDirectory" />
          ...
        </appSettings>

    The app pool will automatically recycle and Team Web Access will start using the new location. If you then download a file (not via a proxy), a folder with a GUID should be created immediately in the folder from step 1. If the folder doesn’t appear, then you probably don’t have permissions set up properly.

    Read the article

  • Augmenting functionality of subclasses without code duplication in C++

    - by Rob W
    I have to add common functionality to some classes that share the same superclass, preferably without bloating the superclass. The simplified inheritance chain looks like this:

        Element -> HTMLElement -> HTMLAnchorElement
        Element -> SVGElement -> SVGAElement

    The default doSomething() method on Element is a no-op by default, but there are some subclasses that need an actual implementation that requires some extra overridden methods and instance members. I cannot put a full implementation of doSomething() in Element because 1) it is only relevant for some of the subclasses, 2) its implementation has a performance impact and 3) it depends on a method that could be overridden by a class in the inheritance chain between the superclass and a subclass, e.g. SVGElement in my example. Especially because of the third point, I wanted to solve the problem using a template class, as follows (it is a kind of decorator for classes):

        struct Element {
            virtual void doSomething() {}
        };

        // T should be an instance of Element
        template<class T>
        struct AugmentedElement : public T {
            // doSomething is expensive and uses T
            virtual void doSomething() override {}
            // Used by doSomething
            virtual bool shouldDoSomething() = 0;
        };

        class SVGElement : public Element { /* ... */ };

        class SVGAElement : public AugmentedElement<SVGElement> {
            // some non-trivial check
            bool shouldDoSomething() { /* ... */ return true; }
        };

        // Similarly for HTMLAElement and others

    I looked around (in the existing (huge) codebase and on the internet), but didn't find any similar code snippets, let alone an evaluation of the effectiveness and pitfalls of this approach. Is my design the right way to go, or is there a better way to add common functionality to some subclasses of a given superclass?

    Read the article

  • Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Darin Pendergraft
    Mark your calendars and register to join this webcast featuring Steve Giovanetti from Hub City Media, Albert Wu from UCLA and our own Scott Bonnell as they discuss a directory upgrade project from Sun DSEE to Oracle Unified Directory. Date: Thursday, September 13, 2012 Time: 10:00 AM Pacific Join us for this webcast and you will: Learn from one customer that has successfully upgraded to the new platform See what technology and business drivers influenced the upgrade Hear about the benefits of OUD’s elastic scalability and unparalleled performance Get additional information and resources for planning an upgrade Register Now!

    Read the article

  • New features in SQL Prompt 6.4

    - by Tom Crossman
    We’re pleased to announce a new beta version of SQL Prompt. We’ve been trying out a few new core technologies, and used them to add features and bug fixes suggested by users on the SQL Prompt forum and suggestions forum. You can download the SQL Prompt 6.4 beta here (zip file). Let us know what you think! New features Execute current statement In a query window, you can now execute the SQL statement under your cursor by pressing Shift + F5. For example, if you have a query containing two statements and your cursor is placed on the second statement: When you press Shift + F5, only the second statement is executed:   Insert semicolons You can now use SQL Prompt to automatically insert missing semicolons after each statement in a query. To insert semicolons, go to the SQL Prompt menu and click Insert Semicolons. Alternatively, hold Ctrl and press B then C. BEGIN…END block highlighting When you place your cursor over a BEGIN or END keyword, SQL Prompt now automatically highlights the matching keyword: Rename variables and aliases You can now use SQL Prompt to rename all occurrences of a variable or alias in a query. To rename a variable or alias, place your cursor over an instance of the variable or alias you want to rename and press F2: Improved loading dialog box The database loading dialog box now shows actual progress, and you can cancel loading databases:   Single suggestion improvement SQL Prompt no longer suggests keywords if the keyword has been typed and no other suggestions exist. Performance improvement SQL Prompt now has less impact on Management Studio start up time. What do you think? We want to hear your feedback about the beta. If you have any suggestions, or bugs to report, tell us on the SQL Prompt forum or our suggestions forum.

    Read the article

  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again as the solution which my son eventually found may assist other users of the forum, or someone may be able to tweak the solution to improve the performance. Having installed Kubuntu 12.04.01 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362 Laptop, which has 256MB RAM (actually 512 "on the box", but a good deal can be borrowed by the graphics!). I found Kubuntu wouldn't run on so little memory but downloaded: Lubuntu-12.04-alternate-i386.iso, which I understood was light enough to go. The laptop has one internal 40GB Toshiba hard drive divided into 3 partitions: C,19GB with Windows XP, Windows program files and some data, D, 19GB mostly data, and a small 2GB partition with some Acer software, which XP can't normally “see”. I transferred most of the contents of D to a memory stick, leaving 16GB free for Lubuntu. I did not want to dump XP yet, though it is painfully slow. I installed Lubuntu from then USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500MB boot partition, 10GB for Linux, 2GB Swap and 6GB for data shareable between Linux and Windows. I had no error messages during installation, rebooted, was offered the choice of Ubuntu or XP, and selected the former. After a few minutes, I get a dark blue screen announcing Lubuntu with five dots underneath which lighten in turn. Eventually the lights stopped, and whatever I try the screen remains blank apart from “Lubuntu” I tried several solutions suggested on the forum for “identical” questions but without success.

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][]
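    For the dynamic case, one workable compromise is to keep each row RLE-encoded and decode/re-encode only the row being edited; with 128-tile rows that is a small, constant amount of work per change. A minimal sketch in Python (a Java version would be structurally identical):

        def rle_encode(row):
            """One row of tile ids -> list of [count, value] runs."""
            runs = []
            for v in row:
                if runs and runs[-1][1] == v:
                    runs[-1][0] += 1          # extend the current run
                else:
                    runs.append([1, v])       # start a new run
            return runs

        def rle_decode(runs):
            """List of [count, value] runs -> flat row of tile ids."""
            out = []
            for count, v in runs:
                out.extend([v] * count)
            return out

        def set_tile(encoded_rows, x, y, tile_id):
            """Decode one 128-tile row, edit it, re-encode: O(row length) per edit."""
            row = rle_decode(encoded_rows[y])
            row[x] = tile_id
            encoded_rows[y] = rle_encode(row)

    Only the edited row is touched, so the other rows (and the other loaded maps) stay compressed; if edits cluster in time, a per-row dirty flag lets you batch the re-encode until save time.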

    Read the article

  • Fixed Assets Recommended Patch Collections

    - by Cindy A B-Oracle
    After the introduction of the Recommended Patch Collections (RPCs) in late 2012, Fixed Assets development has released an RPC about every six months.  You may recall that an RPC is a collection of recommended patches consolidated into a single, downloadable patch, ready to be applied.  The RPCs are created with the following goals in mind: Stability:  Address issues that occur often and interfere with the normal completion of crucial business processes, such as period close--as observed by Oracle Development and Global Customer Support. Root Cause Fixes:  Deliver a root cause fix for data corruption issues that delay period close, normal transaction flow actions, performance, and other issues. Compact:  While bundling a large number of important corrections, the file footprint is kept as small as possible to facilitate uptake and minimize testing. Reliable:  Reliable code with multiple customer downloads and comprehensive testing by QA, Support and Proactive Support.  There has been a revision to the RPC release process for spring 2014.  Instead of releasing product-specific RPCs, development has released a 12.1.3 RPC that is EBS-wide.  This EBS RPC includes all product-recommended patches along with their dependencies. To find out more about this EBS-wide RPC, please review Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 1 (RPC1) (Doc ID 1638535.1).

    Read the article
