Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Serious error on first attempts to dual boot Ubuntu 14.04 with Win7

    - by beetle
    I downloaded Ubuntu 14.04 from the website and saved it to my desktop, opening it with WinRAR. My WinRAR trial had expired, so I then tried Active@Isoburner, but I'm getting no further. I eventually got it burnt onto a DVD (4.7 GB) and tried booting both from the DVD and normally. Neither way works. It looks like it's about to boot, but then a message appears saying that a serious error has occurred:

        The disk drive for /tmp is not ready yet or not present... press I to ignore, S to skip or M for manual...

    At this point I'm lost and unsure what to do. My laptop, a Toshiba Equium A210-17I, is 5 or 6 years old, with 24 GB of available space on the hard drive and 2 GB of RAM. It originally came with Windows Vista Home Premium, but a year or more ago a friend wiped it clean for me, as I was having no end of problems with Vista, and installed Windows 7 Ultimate (which I don't have a disc for). How can I resolve this issue and get Ubuntu to boot up? Do I have to install a previous version of Ubuntu first? Any advice or help would be greatly appreciated. Kind regards, Beetle.

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond.

    Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality.

    Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers.

    Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free. Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome. Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors.

    If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

  • compile error in Ubuntu 10

    - by yozloy
    Hey guys, I've got a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it, following a guide. After I run these commands:

        apt-get update
        apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev

    I get the error below:

        root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc6
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc6
        1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded.
        Need to get 0B/4252kB of archives.
        After this operation, 4096B disk space will be freed.
        Do you want to continue [Y/n]? y
        debconf: apt-extracttemplates failed: Bad file descriptor
        (Reading database ... 21594 files and directories currently installed.)
        Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ...
        open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59
        dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 12
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Can anybody tell me how I can correct this? Thanks.

  • Organizing MVC entities communication

    - by Stefano Borini
    I have the following situation: imagine a MainWindow object which lays out two different widgets, ListWidget and DisplayWidget. ListWidget is populated with data from the disk. DisplayWidget shows the details of the selection the user performs in the ListWidget. I am planning to do the following. In MainWindow I have these objects:

        ListWidget, ListView, ListModel, ListController

    ListView is initialized passing the ListWidget. The ListController is initialized passing the View and the Model. The same happens for the DisplayWidget:

        DisplayWidget, DisplayView, DisplayModel, DisplayController

    I initialize the DisplayView with the widget, and initialize the DisplayModel with the ListController. I do this because the DisplayModel wraps the ListController to get the information about the current selection, and hence the data to be displayed in the DisplayView. I am very rusty with MVC, having been out of UI programming for a while. Is this the expected interaction layout for having different MVC triplets communicate? In other words, MVC focuses on the interaction of three objects. How do you put this interaction, as a whole, into a larger context of communication with other similar entities, MVC or not?
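
    To make the wiring concrete, here is a minimal, hypothetical Python sketch of the layout described above (all class names are illustrative and the views are elided). One common refinement, shown here, is to let the display side subscribe to selection changes on the list controller, so the two triads stay decoupled apart from that single registration:

        class ListModel:
            def __init__(self, items):
                self.items = items

        class ListController:
            """Owns the selection state of the list triad."""
            def __init__(self, model):
                self.model = model
                self.selected = None
                self.listeners = []      # the display side subscribes here

            def select(self, index):
                self.selected = self.model.items[index]
                for notify in self.listeners:
                    notify()

        class DisplayModel:
            """Wraps the list controller to read the current selection."""
            def __init__(self, list_controller):
                self._list = list_controller

            def details(self):
                item = self._list.selected
                return "details of %s" % item if item else "nothing selected"

        list_model = ListModel(["alpha", "beta"])
        list_controller = ListController(list_model)
        display_model = DisplayModel(list_controller)
        list_controller.listeners.append(lambda: print(display_model.details()))
        list_controller.select(1)        # prints "details of beta"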

  • Find points whose pairwise distances approximate a given distance matrix

    - by Stephan Kolassa
    Problem. I have a symmetric distance matrix with entries between zero and one, like this one:

        D = ( 0.0 0.4 0.0 0.5 )
            ( 0.4 0.0 0.2 1.0 )
            ( 0.0 0.2 0.0 0.7 )
            ( 0.5 1.0 0.7 0.0 )

    I would like to find points in the plane that have (approximately) the pairwise distances given in D. I understand that this will usually not be possible with strictly correct distances, so I would be happy with a "good" approximation. My matrices are smallish, no more than 10x10, so performance is not an issue.

    Question. Does anyone know of an algorithm to do this?

    Background. I have sets of probability densities between which I calculate Hellinger distances, which I would like to visualize as above. Each set contains no more than 10 densities (see above), but I have a couple of hundred sets.

    What I did so far. I did consider posting at math.SE, but looking at what gets tagged as "geometry" there, it seems like this kind of computational geometry question would be more on-topic here. If the community thinks this should be migrated, please go ahead. This looks like a straightforward problem in computational geometry, and I would assume that anyone involved in clustering might be interested in such a visualization, but I haven't been able to google anything. One simple approach would be to randomly plonk down points and perturb them until the distance matrix is close to D, e.g., using Simulated Annealing, or run a Genetic Algorithm. I have to admit that I haven't tried that yet, hoping for a smarter way. One specific operationalization of a "good" approximation in the sense above is Problem 4 in the Open Problems section here, with k=2. Now, while finding an algorithm that is guaranteed to find the minimum l1-distance between D and the resulting distance matrix may be an open question, it still seems possible that there at least is some approximation to this optimal solution. If I don't get an answer here, I'll mail the gentleman who posed that problem and ask whether he knows of any approximation algorithm (and post any answer I get to that here).
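
    By way of illustration only, here is a minimal Python sketch of the "randomly plonk down points and perturb" idea mentioned above, posed as an l1 stress minimization with random restarts (Nelder-Mead is used because the objective is non-smooth; the restart count of 20 is an arbitrary choice):

        import numpy as np
        from scipy.optimize import minimize

        D = np.array([[0.0, 0.4, 0.0, 0.5],
                      [0.4, 0.0, 0.2, 1.0],
                      [0.0, 0.2, 0.0, 0.7],
                      [0.5, 1.0, 0.7, 0.0]])
        n = len(D)

        def stress(flat):
            # l1 distance between D and the distance matrix of the candidate points
            X = flat.reshape(n, 2)
            dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
            return np.abs(dist - D)[np.triu_indices(n, 1)].sum()

        rng = np.random.default_rng(0)
        best = min((minimize(stress, rng.random(2 * n), method="Nelder-Mead")
                    for _ in range(20)), key=lambda r: r.fun)
        points = best.x.reshape(n, 2)    # one 2-D point per row of D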

  • jungledisk 3.16 doesn't launch

    - by Angelo
    Has anyone had success with JungleDisk 3.16 on Ubuntu 11.10? I installed it from the .deb file provided by JungleDisk. The install goes fine, but I can't get the "Jungle Disk Desktop" app to launch. It appears in the Dash search bar, but doesn't launch or do anything upon selecting it. When I try the command line, I get the following:

        me@myComputer:~$ jungledisk -V -f
        Verbose mode enabled
        Shutting down...
        me@myComputer:~$

    What's the deal here? Has anybody else experienced this? Does anyone have suggestions for what to try? I opened a help ticket with JungleDisk, but they just asked me which Ubuntu version and which GUI I was using, and then went silent. I've used JungleDisk since 2008 and had no problems. It is sad that it is not working on the new Ubuntu for me. Should I just quit them and use Dropbox or Ubuntu One? (Those seem to be working.)

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle Business Intelligence Enterprise Edition and the Oracle Hyperion Enterprise Performance Management System (download at this link). This document lists the integration points among all current releases of Oracle BI EE with EPM System releases, with live links to other relevant documentation also provided.

    You may also be interested in the overall Hyperion EPM System Documentation Resources, which can be found from the Doc Portal.

    And there are two new tools for EPM at My Oracle Support (this needs your Oracle logon):

    Cumulative Feature Overview Tool: this new tool offers a simple way to determine the features developed between releases, to assist you in your upgrade implementations. The tool helps you to plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality that are added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product.

    Defects Fixed Finder Tool: this new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Yes, the DVD is finally available. This is an exhaustive 14-hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details, including ordering information for the DVD, here. And if you’re interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy them.

    Detailed Table of Contents:

        Introduction (13:49)
        30,000 Foot Overview (42:07)
        Application Management (43:35)
        User Experience (16:00)
        Writing Code Part 1 (1:07:49)
        Writing Code Part 2 (34:41)
        Simple Web Parts (14:01)
        Visual Web Parts (6:35)
        Pages (35:02)
        Putting it All Together (29:13)
        Client Side Technology (49:19)
        ADO.NET Data Services (51:29)
        Custom Data Services (43:30)
        Managing Data (29:02)
        Managing Data: Content Types (17:11)
        Managing Data: Events (19:22)
        Managing Data: List Scalability (35:51)
        Managing Data: Querying (20:07)
        Enterprise Content Management: DocumentIDs and Document Sets (16:44)
        Enterprise Content Management: Metadata Infrastructure (22:13)
        Enterprise Content Management: Record Management (26:27)
        Enterprise Content Management: Content Organizer (7:21)
        Enterprise Content Management: Enterprise Content Types (11:21)
        Business Connectivity Services (BCS) in the SharePoint Designer (26:09)
        BCS in Visual Studio (9:57)
        Workflows in the SharePoint Designer (22:07)
        Workflows in Visual Studio (19:01)
        Business Intelligence (21:14)
        Excel (15:25)
        Performance Point (24:37)
        Security: Claims-Based Authentication (27:13)
        Security: Secure Store Service (11:04)
        Security: The SharePoint Object Model (11:16)

  • ArchBeat Link-o-Rama for 10-19-2012

    - by Bob Rhubart
    One Week to Go: OTN Architect Day Los Angeles - Oct 25
    Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Moving your APEX app to the Oracle Cloud | Dimitri Gielis
    Oracle ACE Director (and OSN Developer Challenge co-winner) Dimitri Gielis shares the steps in the process as he moves his "DGTournament" application, along with all of its data, onto the Oracle Cloud.

    A brief note for customers running SOA Suite on AIX platforms | A-Team - SOA
    "When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks," says Christian, an architect on the Oracle Fusion Middleware A-Team. "On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters."

    Introducing the New Face of Fusion Applications | Misha Vaughan
    Oracle ACE Directors Debra Lilly and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look.

    ADF Essentials Security Implementation for Glassfish Deployment | Andrejus Baranovskis
    According to Oracle ACE Director Andrejus Baranovskis, Oracle ADF Essentials includes all the key ADF technologies, save one: ADF Security. In this post he illustrates a solution for filling that gap.

    Thought for the Day
    "Why are video games so much better designed than office software? Because people who design video games love to play video games. People who design office software look forward to doing something else on the weekend." — Ted Nelson (Source: softwarequotes.com)

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM seems to come out to about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHz CPU with 768 MB of RAM. I think the minimum requirements for Server 2008 are a 1 GHz CPU with 512 MB of RAM. Also, I think the minimum requirements for SQL Server Express are a 1 GHz CPU with 256 MB of RAM, and the minimum requirements for Team Foundation Server Express 11 Beta are a 2.2 GHz CPU with 1 GB of RAM (this 2.2 GHz part could be a problem for my 1 GHz XS VM...). Given the performance of the XS Azure instance, would I be able to install:

        a very basic MVC web site;
        a free instance of SQL Server Express;
        a free single-user instance of Team Foundation Server Express 11 Beta

    and run the XS VM instance without serious crashing? I know there are other shared web host providers that can provide these features for me, but those hosting providers have the following disadvantages:

        They sometimes cost a lot of money after all of the "addons" are in place
        They probably don't provide the level of security and employee integrity that Microsoft can provide
        They don't provide the total control that an Azure VM seems to provide

  • My laptop with Linux/Ubuntu isn't working

    - by Andy Campos
    I have a Dell laptop with Ubuntu Linux. One day I tried to start it up and a black screen appeared that says:

        GNU GRUB version 1.98+20100804-5ubuntu3

    with these selectable options:

        Ubuntu, with Linux 2.6.35-22-generic
        Ubuntu, with Linux 2.6.35-22-generic (recovery mode)
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)

    When I pick the first one, a bunch of text appears like:

        mount: mounting /dev/disk/by-uuid/8396a225... failed: invalid argument
        mount: mounting /dev on /root/dev failed: no such file or directory
        mount: mounting /sys on /root/sys failed: no such file or directory
        mount: mounting /proc on /root/proc failed: no such file or directory
        Target file system doesn't have requested /sbin/init
        No init found. Try passing init= bootarg
        Enter 'help' for a list of built-in commands
        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        (initramfs)

    When I enter 'help', a bunch more incomprehensible text appears. Whenever I press the Enter key, all that pops up is (initramfs). If anyone can make rhyme or reason out of this, please help me out so it can boot up normally and I can be set. If there's some kind of special code I have to type in or something, please spell it out; I know nothing about computers.

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters to be an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities. There was also a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The controller will end up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories, plus a few others, to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request, and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view, it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.

  • Ubuntu 12.04 upgrade and thunderbird

    - by Dcm1405
    After applying the suggested updates (179), an error message at the very end of the process suggested that I run apt-get install -f. Since it is a fairly new Ubuntu install (x86), I hadn't set up anything in Thunderbird yet. Different error messages (see details) were generated by the -f process:

        ~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          thunderbird
        Suggested packages:
          latex-xft-fonts
        The following packages will be upgraded:
          thunderbird
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/20.8 MB of archives.
        After this operation, 594 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 170457 files and directories currently installed.)
        Preparing to replace thunderbird 11.0.1+build1-0ubuntu2 (using .../thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb) ...
        Unpacking replacement thunderbird ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: invalid code lengths set'
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb (--unpack):
         short read on buffer copy for backend dpkg-deb during `./usr/lib/thunderbird/libxul.so'
        Errors were encountered while processing:
         /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question.

    If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:

        I/O
        Exceptions/errors
        Interfaces with programs written in other languages
        Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
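
    To make the aggregation point concrete, here is a small, hypothetical Python sketch. The workers mutate nothing shared and produce only immutable tuples, yet collecting the results still goes through one synchronized structure (Queue wraps an internal lock), so the coordination has moved rather than disappeared:

        import threading
        import queue

        results = queue.Queue()   # the one shared, synchronized rendezvous point

        def worker(data):
            # a pure transformation: builds a new immutable tuple,
            # mutates nothing shared
            results.put(tuple(x * x for x in data))

        threads = [threading.Thread(target=worker, args=(range(i, i + 4),))
                   for i in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # aggregation: the immutable pieces are combined here; the handoff
        # itself still needed Queue's internal lock, immutability alone
        # did not cover it
        total = sum(sum(part) for part in (results.get() for _ in range(2)))
        print(total)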

  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client and a server, and a third box which sees all packets from the server to the client (but not the other way around). Now, when the client requests a file from the server (over HTTP), the third box sees the response, and I am trying to reconstruct the file there. I am using libpcap to capture the TCP datagrams. Here is what I did:

        1. Listen for packets on an interface
        2. Group all packets which have the same ACK number
        3. Sort each group based on SEQ number
        4. Extract the data from each packet, combine it, and write it to disk

    The problem is, the file thus generated is not exactly the same as the original file. Does everything sound correct here? Some more details:

        I am using C++
        The packet data is being stored as std::vector<char>
        I did change the byte order while reading the ACK number and SEQ number from the packet, using ntohl
        I am not sure if I need to change the byte order for the data as well. I tried reversing the data from each packet before combining; even that did not work.

    Is there something I am missing?
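
    One detail the steps above don't handle is retransmitted or overlapping segments, which can get concatenated twice; it is the SEQ numbers (not the ACK grouping) that define byte positions in the stream. Here is a small, hypothetical Python sketch of per-flow reassembly that skips duplicates and trims overlaps (32-bit sequence wraparound is ignored for brevity):

        def reassemble(segments):
            """segments: list of (seq, payload) pairs for one direction of a flow."""
            out = bytearray()
            expected = None
            for seq, payload in sorted(segments, key=lambda s: s[0]):
                if expected is None:
                    expected = seq               # stream starts at the first SEQ seen
                if seq + len(payload) <= expected:
                    continue                     # pure retransmission: skip it
                if seq < expected:               # partial overlap: keep the new tail
                    payload = payload[expected - seq:]
                    seq = expected
                if seq > expected:
                    raise ValueError("missing segment at offset %d" % expected)
                out += payload
                expected += len(payload)
            return bytes(out)

        # toy usage: the retransmitted middle segment is dropped, not duplicated
        parts = [(100, b"GET"), (103, b" /fil"), (103, b" /fil"), (108, b"e")]
        assert reassemble(parts) == b"GET /file"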

  • Trouble installing Pokerstars on a Live USB without Persistence through WINE

    - by Ricky Foster
    I need to install some form of Texas Hold 'Em on a Lubuntu live USB that doesn't have persistence. I was able to download PokerStars.net and to run the .exe (a Windows-type file) using Wine for Linux (Lubuntu). But when I try to install it, I have no room: the only place on the live USB is in the root folder, which is set to read-only. Is there any way I can change the read-only properties of the live USB while it's in use?

    So, to recap: I am running Lubuntu 13.04 and can't start in persistent mode. When I start normally, everything works fine. I proceeded to Chromium and successfully downloaded Wine and the PokerStars .exe. I right-clicked the downloaded file, then clicked Wine, and the installer loaded fine. There are about 8 different disk icons, and only the one containing system files is active. Is there any way I can use the terminal to install it to root? Thanks in advance for your answer/alternate method (without my having to buy another USB to install it to).

  • What are the common mistakes in 'tailored Scrum approaches'?

    - by Clark Gable
    I have seen this before: management wants to be agile and be scrummified, but does not want to step out of the status quo. My latest observation is no different; here, Scrum is 'tailored' to the organization, specifically into a weird many-people process. [The diagram shows the different participants.] I am putting together a document listing why this will not work. Here are the obvious problems:

        1. There are product owner agents (an obvious WTF), who report to the product owner: this dilutes decision-making capability.
        2. There is a role that looks similar to a manager in the traditional approach, the development manager: an obvious attempt at a command-and-control model.
        3. The ScrumMaster's role includes collecting timesheets, which are used to track progress instead of burndown charts: detrimental to agile's efforts to build teams of motivated individuals.

    Leaving aside the question "how would you convince the management?", my question is more: what else do you see as failures in this or similar 'tailored Scrum approaches'?

    EDIT: The diagram might use a few more details:

        1. The development manager is not part of the development team, and has not very clearly defined responsibilities, except: developer performance assessment, recruitment, etc.
        2. There are more than two teams (each with ScrumMaster + development manager + dev team) with the same product owner for all teams!

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom Work Manager in some circumstances: for example, when you want the server to prioritize one application over another, when a response time goal is required, or when a minimum thread constraint is needed to avoid deadlock.

    The Work Manager Shutdown Trigger is a tool to help with stuck threads, which will do the following:

        Shut down the Work Manager.
        Move the application to Admin State (not active).
        Change the server instance health state to failed.

    Example of a Shutdown Trigger set in the config.xml for your domain:

        <work-manager>
          <name>stuckthread_workmanager</name>
          <work-manager-shutdown-trigger>
            <max-stuck-thread-time>30</max-stuck-thread-time>
            <stuck-thread-count>2</stuck-thread-count>
          </work-manager-shutdown-trigger>
        </work-manager>

    Understand that any misconfiguration of the Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production.

    How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create a WorkManagerShutdownTrigger using WLST by following these steps:

        edit()
        startEdit()
        cd('/SelfTuning/mydomain/WorkManagers')
        create('myWM','WorkManager')
        cd('myWM/WorkManagerShutdownTrigger')
        create('myWMst','WorkManagerShutdownTrigger')
        cd('myWMst')
        ls()

  • Updating an ADF Web Service Data Control When Service Structure or Location Change

    - by Shay Shmeltzer
    The web service data control in Oracle ADF gives you a simplified approach to consuming services in ADF applications, and now with ADF Mobile the usage of this service seems to be growing. A frequent question we get is what happens if the service that I'm consuming changes - how do I update my data control?

    Well, first we should mention that if you do a good design of your application before you actually code - then things like Web service method signature shouldn't change. The signature is the contract between the publisher and the consumer, and contracts shouldn't be broken. But in reality things do change during development stages, so here is how you can update both method signatures and service location with the Web service data control:

    After watching this video you might be tempted to not copy the WSDLs to your project - which lets you use the right click update on a data control. However there is a reason why the copy is on by default: it reduces network traffic when you are actually running your application, since ADF doesn't need to go to the server to find out the service structure. So for runtime performance, you probably should keep the WSDL local.

    I encourage you to further look into both the connections.xml file, where your service location is saved, and the datacontrols.dcx file, where its definition is kept, to get an even deeper understanding of how ADF works underneath the declarative layers.

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language, and later perhaps ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as time zone, screen resolution, and the complete list of fonts and plugins available.

    My first impression is: is all this information really usable/used on a majority of all websites? For example, how many sites really send different content types depending on the HTTP Accept header, or on what fonts are available (I thought CSS had taken care of this)?

    Let's say one of these headers or pieces of JS functionality would one day be gone. Which ones would:

        never be noticed they were gone?
        impact user experience?
        impact server performance?
        be immediately reimplemented because the Internet cannot work without them?

    Extra credit for differentiating between what can be done, what should be done, and what is done in most situations.
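
    As a side note on how "identifying bits" are counted (this is just the standard surprisal formula, not anything specific to Panopticlick's implementation): if a fraction p of observed browsers shares your value for some attribute, seeing that value yields -log2(p) bits. A tiny Python illustration:

        import math

        def identifying_bits(share):
            # bits contributed by an attribute value that a fraction
            # `share` of observed browsers has
            return -math.log2(share)

        print(identifying_bits(1 / 2))    # 1.0 bit: half of all browsers match
        print(identifying_bits(1 / 500))  # ~9 bits: a rare font list, say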

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development, I raised my head above the parapet to read a few odds and ends and then thought: why don't they know this? Such as this article here; in the past, customers (see the forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

        improves the out-of-the-box experience - just build the mapping and the appropriate KM is used
        improves out-of-the-box performance for file-to-file data movement

    This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop; the KM works with the lightweight agent and regular filesystems.

    So in my design below, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds); by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box, from the design to the execution, without any funky configuration. Plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!
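
    The KM itself is generated Java, but the mechanism described above (stream each file instead of staging it, and run a small thread pool across wildcard-matched sources) is easy to picture with a short Python sketch; the file names and the transform here are made up for illustration:

        import glob
        from concurrent.futures import ThreadPoolExecutor

        def transform(line):
            # stand-in for the mapping expressions the KM would generate
            return line.upper()

        def process(src):
            # stream source to target line by line: no staging area involved
            with open(src) as fin, open(src + ".out", "w") as fout:
                for line in fin:
                    fout.write(transform(line))

        # a wildcard resource name means several source files, handled concurrently
        with ThreadPoolExecutor(max_workers=2) as pool:
            list(pool.map(process, glob.glob("input_*.txt")))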

  • ArchBeat Link-o-Rama for November 29, 2012

    - by Bob Rhubart
    Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads
    This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal | Murali Veligeti
    A detailed technical post with supporting downloads from Murali Veligeti.

    Business SOA: When to shout, the art of constructive destruction
    Communication skills are essential for architects. Sometimes that means raising your voice. Steve Jones shares some tips for effective communication when the time comes to let it all out.

    Centralized Transaction Management for ADF Data Control | Andrejus Baranovskis
    Oracle ACE Director and prolific blogger Andrejus Baranovskis shares instructions and a sample application to illustrate how to implement centralized Commit/Rollback management in an ADF application.

    Collaborative Police across multiple stakeholders and jurisdictions | Joop Koster
    Capgemini Oracle Solution Architect Joop Koster raises some interesting IT issues regarding the challenges facing international law enforcement.

    Architected Systems: "If you don't develop an architecture, you will get one anyway…"
    "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    Thought for the Day
    "Good judgment comes from experience, and experience comes from bad judgment." — Frederick P. Brooks (Source: Quotes for Software Engineers)

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open/Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed.

    I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly not at all). That is:

        If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated.
        If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them.

    In view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction/conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?
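
    To make the I/A/B pattern above concrete, here is a hypothetical Python sketch (Storage, FileStorage and CachedStorage are invented names). The frozen interface and the stable implementation are never edited; a new implementation is added alongside them, and only the single construction point changes:

        from abc import ABC, abstractmethod

        class Storage(ABC):                 # the frozen interface "I"
            @abstractmethod
            def save(self, key, value): ...

        class FileStorage(Storage):         # stable, tested implementation "A"
            def save(self, key, value):
                with open("%s.dat" % key, "w") as f:
                    f.write(value)

        class CachedStorage(Storage):       # new implementation "B"; "A" is untouched
            def __init__(self):
                self.cache = {}
            def save(self, key, value):
                self.cache[key] = value

        def make_storage() -> Storage:      # the only line that changes once B matures
            return FileStorage()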

  • Experiences with Ubuntu One?

    - by rsuarez
    I'm currently testing several backup/sync services: Dropbox, SpiderOak, and Ubuntu One. Of them, Dropbox is the winner hands down; SpiderOak is nice too, but a bit more intrusive and unpredictable (sometimes it is slow syncing some files, or doesn't sync them at all). Ubuntu One has promise, but I've used it much less than the other two.

    I'm thinking about buying a "20 pack" and using Ubuntu One as my only synchronization software. It's the cheapest of them all ($3/month vs. $10 for Dropbox and SpiderOak), and 20 GB of space is enough for me. My intention is to sync most of my $HOME folder. All the computers I'll connect will have Ubuntu installed, so not being multiplatform doesn't really matter to me. If its performance is as good as Dropbox's, I'm sold. But I'd like to gauge some experiences here first. Is anyone using it seriously? I.e., to sync a lot of files that change often (like the aforementioned $HOME folder, program sources, or the like). What have been your experiences? Thanks in advance.

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010. The product team worked hard to preview a constantly evolving solution at the BUILD conference last year, and after having completed 31 sprints, today the preview service has been opened for all. No more invitation codes required; TfsPreview has been made public!

        "Since we announced the Team Foundation Service Preview at the BUILD conference last year, we’ve limited the onboarding of new customers by requiring invitation codes to create accounts. The main reason for this has been to control the growth of the service to make sure it didn’t run away from us and end up with a bad user experience. In this time period, we’ve continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build." - Brian Harry

    Since the service is still in preview, it is free for all. If you haven't, now is the best time to try out the offering. There is no fixed timeline for how long before the service becomes chargeable, but the terms of service support production use, the service is reliable, and the product team is committed to carrying all of your data forward into production.

        "The service will remain in 'preview' for a while longer while we work through additional features like data portability, commercial terms, etc., but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production." - Brian Harry

    As of today it's possible to use TFS Preview with VS 2012 RC, VS 2010 SP1, and VS 2008 SP1. The service currently does not work with VS 2005; this is something the product team is actively working on. You can refer to Brian's announcement blog post here: http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx
