Search Results

Search found 25797 results on 1032 pages for 'source formatting'.

Page 649/1032 | < Previous Page | 645 646 647 648 649 650 651 652 653 654 655 656  | Next Page >

  • Introduction to Oracle ADF

    - by Arda Eralp
    The Oracle Application Development Framework (Oracle ADF) is an end-to-end application framework that builds on Java Platform, Enterprise Edition (Java EE) standards and open-source technologies. You can use Oracle ADF to implement enterprise solutions that search, display, create, modify, and validate data using web, wireless, desktop, or web services interfaces. Because of its declarative nature, Oracle ADF simplifies and accelerates development by allowing users to focus on the logic of application creation rather than coding details. Used in tandem, Oracle JDeveloper 11g and Oracle ADF give you an environment that covers the full development lifecycle from design to deployment, with drag-and-drop data binding, visual UI design, and team development features built in. In line with community best practices, applications you build using the Fusion web technology stack achieve a clean separation of business logic, page navigation, and user interface by adhering to a model-view-controller architecture. MVC architecture:
      The model layer represents the data values related to the current page
      The view layer contains the UI pages used to view or modify that data
      The controller layer processes user input and determines page navigation
      The business service layer handles data access and encapsulates business logic
    Each ADF module fits in the Fusion web application architecture. The core module in the framework is ADF Model, a data binding facility. The ADF Model layer enables a unified approach to bind any user interface to any business service, without the need to write code. The other modules that make up a Fusion web application technology stack are:
      ADF Business Components, which simplifies building business services.
      ADF Faces rich client, which offers a rich library of AJAX-enabled UI components for web applications built with JavaServer Faces (JSF).
      ADF Controller, which integrates JSF with ADF Model. The ADF Controller extends the standard JSF controller by providing additional functionality, such as reusable task flows that pass control not only between JSF pages, but also between other activities, for instance method calls or other task flows.

    Read the article

  • How do I copy a package from Debian to my PPA?

    - by Bernhard Reiter
    I'd like to add the latest gourmet package from Debian sid to our team's PPA so Ubuntu users who would like to run an up-to-date version of Gourmet can add that PPA to their software sources. (Dependency-wise, that shouldn't be much of an issue as pretty much all our current dependencies are already available in all currently supported Ubuntu versions.) I've downloaded the *.dsc file and the debian and orig tarballs, and even figured out I could use this for the package's source.changes file. I also downloaded the Debian maintainer's public key so dput can validate the package. I then tried to upload the package to our PPA using dput ppa:~gourmet/ppa gourmet_0.17.3-1_source.changes (I also tried without the tilde.) This seemed to succeed, but I didn't get a confirmation email, and no packages are now displayed at our PPA, which leads me to believe that the package was rejected because the Debian maintainer's key is obviously not among our team members' keys. So what's the easiest way to "copy" a package from Debian (sid) to a Launchpad PPA? Do I really need to rebuild the entire package locally before I can upload it?

    Read the article

  • Solaris 10 opencsw git package issue with bitbucket git hosting

    - by zephyrus00jp
    Has anyone tried using `git' from the opencsw package in order to work with the bitbucket source hosting service (under Solaris 10)? I tried to use git as the bitbucket documentation explains: under Debian GNU/Linux it worked flawlessly as described, but under Solaris 10 I got an Authentication Failed message. I even tried to run truss to see if anything was suspicious, but could not find any smoking gun under Solaris for why it failed. Running ldd on the git binary didn't show anything suspicious either (except for the libcrypt library, which made me wonder about export restrictions. Have they shipped an incompatible version? But since the password is typed into an https connection, I suspect it is only a matter of web-level cryptography and should be universal these days.) I am now tempted to compile the git suite under Solaris 10, but I did find people who seem to be using git with bitbucket under Solaris 10 and am wondering what could be wrong.

    Read the article

  • Oracle Java Olympics Between Russia, Ukraine, Belarus, and Kazakhstan

    - by Tori Wieldt
    Last month, 151 universities in 11 locations (Saint-Petersburg, Moscow, Donetsk, Tomsk, Odessa, Rostov-on-Don, Ekaterinburg, Khabarovsk, Almaty, Kiev, and Samara) competed in the second round of the Oracle Java Olympics. For two weeks in February, the best university students from Russia, Ukraine, Belarus, and Kazakhstan were invited to compete with each other and prove just how good they are at Java programming. A team of engineers from the Oracle Development center in Saint-Petersburg prepared the set of problems to solve during the competition. To win, participants needed to show deep knowledge of Java technologies from Classloader and NIO to Reflection and JavaDB. Students in each location had a PC with Oracle JDK 1.7u2 and Netbeans 7.1. As a testing system, the organizers used the open source software Ejudge (with several tweaks specifically for the competition). Participants submitted their solutions to the remote server, where they were tested by prepared test harnesses. All results were posted in real time. "I followed the competition coming in from the many sites, and it was a really exciting experience, like a horse race or football game!" exclaimed Java Evangelist Alexander Belokrylov. Congratulations to everyone who competed! The Olympic finals will be held on April 4th.

    Read the article

  • nautilus crash when merging/overwriting files

    - by sBlatt
    On my Ubuntu 10.10, whenever I want to copy some files/folders over some other files/folders, or when I try to empty the trash, nautilus crashes! Example: I have a folder with some files. Now I want to overwrite this folder with a folder with the same name and the same files, but some additional files; the merge window comes up, I choose merge, and nautilus crashes (it does not respond, and when I press the close button I can force close it). Sometimes it even does the copying/emptying (trash), but it always crashes! This happens when copying to the same partition/ntfs partition/netshares, but not when I make a new folder and copy the files/folders into that (without overwriting anything). On a netshare, it's even possible to merge these files afterwards with another computer! dmesg/syslog/messages does not show any entry related to that problem. Does anyone have a solution for this very annoying problem? EDIT: dpkg -l nautilus* (see output in pastebin) EDIT2: I found out nautilus already crashes before clicking replace/merge (as soon as the question appears). In the video it's not entirely clear that I click the cross before the force-close dialog appears. Video of problem nautilus-debug-log.txt EDIT3: Filed bug report: https://bugs.launchpad.net/ubuntu/+source/nautilus/+bug/678233

    Read the article

  • Unable to log iptables

    - by ActuatedCrayon
    I'm having trouble getting iptables to log to any file. My iptables looks like:
      Chain INPUT (policy ACCEPT 1366 packets, 433582 bytes)
       pkts bytes target  prot opt in      out  source     destination
        869 60656 LOG     icmp --  venet0  *    0.0.0.0/0  0.0.0.0/0    LOG flags 0 level 7
    Syslogd is the only log helper running. The default syslog.conf didn't work, so I tried adding "kern.=debug -/var/log/iptables.log". But the file already has "kern.* -/var/log/kern.log". There are recent syslog entries, so it's not a permissions thing. I'm running Ubuntu 12.04.1 with 2.6.32-042stab061.2

    Read the article

  • ArchBeat Link-o-Rama for November 29, 2012

    - by Bob Rhubart
    Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads
      This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.
    Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal | Murali Veligeti
      A detailed technical post with supporting downloads from Murali Veligeti.
    Business SOA: When to shout, the art of constructive destruction
      Communication skills are essential for architects. Sometimes that means raising your voice. Steve Jones shares some tips for effective communication when the time comes to let it all out.
    Centralized Transaction Management for ADF Data Control | Andrejus Baranovskis
      Oracle ACE Director and prolific blogger Andrejus Baranovskis shares instructions and a sample application to illustrate how to implement centralized Commit/Rollback management in an ADF application.
    Collaborative Police across multiple stakeholders and jurisdictions | Joop Koster
      Capgemini Oracle Solution Architect Joop Koster raises some interesting IT issues regarding the challenges facing international law enforcement.
    Architected Systems: "If you don't develop an architecture, you will get one anyway…"
      "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."
    Thought for the Day
      "Good judgment comes from experience, and experience comes from bad judgment." — Frederick P. Brooks
      Source: Quotes for Software Engineers

    Read the article

  • Can an internally developed fast evolving, agile, short sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target: achieve readiness to successfully manage and deliver results through the use of offshore teams on our mainline development project within 12 months. Our mainline is a multi-thousand-user, highly available web application, plus various related SaaS components delivered through that web application. We work agile on the mainline with a rapid 1-week sprint using continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is: an offshore team could work if we either ship out an entire isolated project for offshore development, or we specify a component of our system in huge detail up front. But we don't currently work like that, it would conflict with the in-house method, and unless the offshore team is working within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed source bespoke framework (private IP) which we train our developers to use, and we work agile, minimising documentation, maximising communication and responding to rapidly changing requirements, and much of the quality control is via team skills building and peer review, how can I make offshoring work on our mainline development?

    Read the article

  • Public DNS Server fails on Windows Amazon EC2

    - by Adroidist
    I have started a new Windows server instance on Amazon EC2. The security group has the following rules:
      Ports  Protocol  Source
      22     tcp       0.0.0.0/0
      80     tcp       0.0.0.0/0
      443    tcp       0.0.0.0/0
      3389   tcp       0.0.0.0/0
      53     udp       0.0.0.0/0
      -1     icmp      0.0.0.0/0
    I am able to ping the public DNS name of the machine and I can connect to it using a Windows Remote Desktop connection. However, when I put the public DNS name into my web browser, it fails to connect. Moreover, I used FileZilla and PuTTY (and in both I loaded the private key .pem) but I receive connection timed out. I disabled the firewall on both my PC and the instance (which I entered using a Remote Desktop connection). Can you please tell me what I am missing?

    Read the article

  • What software can copy the whole hard drive with Operating System to DVD-R, and be able to "refresh

    - by Jian Lin
    What software can take a snapshot of a Win XP or Win 7 machine -- burning all files onto a DVD-R -- and then be able to boot from that DVD-R and restore the whole machine back to the state stored on the DVD-R? Maybe for Win XP it is easier, as the OS can be just 1 or 2 GB on the hard drive, but for Win 7 a fresh installation is already 16 GB on the hard drive, so it will need several DVD-Rs to take the snapshot? Thanks. (Are any of these programs open source?)

    Read the article

  • Looking for a small, light scene graph style abstraction lib for shader based OpenGL

    - by Pris
    I'm looking for a 'lean and mean' C/C++ scene graph library for OpenGL that doesn't use any deprecated functionality. It should be cross-platform (strictly speaking I just dev on Linux, so no love lost if it doesn't work on Windows), and it should be possible to deploy to mobile targets (i.e. OpenGL ES 2, and no crazy mandatory dependencies that wouldn't port well to modern mobile frameworks like iOS, Android, etc), with a license that's compatible with closed source software (LGPL or more liberal). Specific nice-to-haves would be:
      Cameras and Viewers (trackball, fly-by, etc)
      Object transform hierarchies (if B is a child of A, and you move A, B has the same transform applied to it)
      Simple animation
      Scene optimization (frustum culling, use VBOs, minimize state changes, etc)
      Text
    I've played around with OpenSceneGraph a lot and it's pretty amazing for fixed function pipeline stuff, but I've had a few problems using it with the programmable pipeline, and after going through their mailing list, it seems several people have had similar issues (going back years). Kitware's VES looks neat (http://www.vtk.org/Wiki/VES), but VES + VTK is pretty heavy. VTK is also typically for analyzing scientific data and I've read that it's not that appropriate for a general use case (not that great at rendering a lot of objects in a scene, etc). I'm currently looking at VisualizationLibrary (http://www.visualizationlibrary.org/documentation/pag_gallery.html) which looks like it offers some of the functionality I'd like, but it doesn't explicitly support mobile targets. Other solutions like Ogre, Horde3D, Irrlicht, etc tend to be full-on game engines and that's not really what I'm looking for. I'd like some suggestions for other libraries that I may have missed... please note I'm not willing to roll my own solution from scratch.

    Read the article

  • supervisord failed to start nagiosapi after reboot, need to run reload manually

    - by Bajingan Keparat
    I have supervisord start nagiosapi every time the server starts. The API uses a status dump file called status.dat, which gets updated periodically. The following is the conf file that starts the API:
      [program:nagapi]
      directory = /home/nagapi
      user = api
      command = /bin/bash -c "source /home/nagapi/.virtualenvs/nagapi/bin/activate; /home/nagapi/nagios-api/nagios-api"
      stdout_logfile = /home/nagapi/supervisor_nagios-api_stdout.log
      stderr_logfile = /home/nagapi/supervisor_nagios-api_stderr.log
    Every time I restart the server, supervisord cannot start the API. The stderr log claims that it cannot find the status.dat file located in /var/cache/nagios3. It seems like the file was not created yet when supervisord tried to run the API the first time. I'm saying this because if I do a supervisorctl reload, everything reloads just fine, and the API runs OK about 50 seconds after the reload command completes. Should I change the command option of the conf file to check for
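    If the idea is indeed to have the command option check for status.dat before starting the API, one approach is to point command at a small wrapper that blocks until the file exists and then execs the real command line. The following Python sketch only illustrates that idea: the status.dat path and the virtualenv command line are taken from the question above, while the wrapper filename and timeout are assumptions.
      #!/usr/bin/env python
      # wait_for_status.py - hypothetical wrapper for the supervisord "command" option.
      # Waits for Nagios to write status.dat, then replaces itself with the real
      # nagios-api command so supervisord still supervises the right process.
      import os
      import sys
      import time
      STATUS_FILE = "/var/cache/nagios3/status.dat"   # path from the question
      TIMEOUT_SECONDS = 300                           # assumption: give up after 5 minutes
      deadline = time.time() + TIMEOUT_SECONDS
      while not os.path.exists(STATUS_FILE):
          if time.time() > deadline:
              sys.stderr.write("gave up waiting for %s\n" % STATUS_FILE)
              sys.exit(1)
          time.sleep(5)
      # Same command line as the original [program:nagapi] section.
      os.execv("/bin/bash",
               ["/bin/bash", "-c",
                "source /home/nagapi/.virtualenvs/nagapi/bin/activate; "
                "/home/nagapi/nagios-api/nagios-api"])
    An alternative that avoids a wrapper entirely would be raising startretries (and possibly startsecs) in the [program:nagapi] section so supervisord keeps retrying the original command until Nagios has produced the file.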

    Read the article

  • Exchange Server 2007 Transport Errors -- SMTP Session Hangs

    - by devviedev
    Hello, I was making some changes to the server (writing a Transport Agent). After trying to install it, I started to get some errors. Now, when connecting to the SMTP server, the session hangs just after finishing the DATA section. I'm not sure what happened; I disabled my transport agent and uninstalled it, then restarted the server, but the problem persists. In the Event Viewer, four of the same errors show up:
      Source: FSCTransportScanner
      Category: Scan Error
      Event ID: 5021
      Description: Unable to retrieve internet monitor interface.
    What could have happened?

    Read the article

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on a website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is created on Drupal 6. The client is created on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then there will be a need for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.:
      Drupal 6 server = 1 repository
      Drupal 6 client = 1 repository
      Drupal 7 server = 1 repository
      Drupal 7 client = 1 repository
      etc
    Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories - one for the client and another for the server.

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (those that try to be good at everything: support multiple paradigms, support both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:
      Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.
      Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain specific and/or have very poor concurrency/parallelism support.
      When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in.
    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?

    Read the article

  • import macintosh thunderbird emails to windows email

    - by Ryan
    My Mac laptop died. The hard drive is good. I am able to pull what I need off my Mac and copy it to Windows using an open source tool (I forget the name of it). I need my old Mac emails. I was using Thunderbird on the Mac for email. How do I load those emails into Thunderbird, Outlook, or any other Windows email tool? I did google this and there were some brief explanations, but they did not work. The vast majority of posts are about going from Windows to Mac.

    Read the article

  • Visio 2010 Reverse Engineer Oracle

    - by digitall
    I have used Visio 2007 in the past to reverse engineer Oracle databases to get a flow scheme. I believe all Office 2007 products were x86 as well which is where I suspect my issue currently lies. I have since upgraded to Visio 2010 x64 and when I go to reverse engineer something from Oracle it shows up under Installed Visio Drivers but I can't seem to create a data source using it. My assumption here is it is because Oracle doesn't play nicely with x64 and with Visio being compiled as x64 I don't even get the option to use it. Has anyone done this with Visio 2010 x64 and Oracle yet? Or are there other tools you would recommend to reverse engineer and get a model such as the one generated by Visio?

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We'll be looking at a typical database 'migration' script which uses an unusual technique to migrate existing 'de-normalised' data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into 'de-normalisation' habits. What's this? A new column with a list of tags or 'types' assigned to books. Because books aren't really in just one category, someone has 'cured' the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the 'Type' field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can't be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn't exist, so we've added it and put in 10,000 titles using SQL Data Generator.
      /* So how do we refactor this database? Firstly, we create a table of all the tags. */
      IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
      BEGIN
        CREATE TABLE TagName
          (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
           Tag VARCHAR(20) NOT NULL UNIQUE)
        /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
        INSERT INTO TagName (Tag)
          SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
        /* we can then use this table to provide a table that relates tags to articles */
        CREATE TABLE TagTitle
          (TagTitle_ID INT IDENTITY(1, 1),
           [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
           TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
           CONSTRAINT [PK_TagTitle]
             PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
             ON [PRIMARY])
        CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
          INCLUDE (TagTitle_ID, title_id)
        /* ...and it is easy to fill this with the tags for each title ... */
        INSERT INTO TagTitle (Title_ID, TagName_ID)
          SELECT DISTINCT Title_ID, TagName_ID
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
          INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
      END
      /* That's all there was to it.
      Now we can select all titles that have the military tag, just to try things out */
      SELECT Title FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        WHERE tagname.tag = 'Military'
      /* and see the top ten most popular tags for titles */
      SELECT Tag, COUNT(*) FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        GROUP BY Tag ORDER BY COUNT(*) DESC
      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID
    So we've refactored our PUBS database without pain. We've even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We've done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I'd do a whole lot of tests first before doing that. Yes, I've left out the assertion tests too, which should check the edge cases and make sure the result is what you'd expect. One thing I can't quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary 'type' to be first in the list, but what if the entire order is significant? Thank goodness it isn't in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence. You'll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I've taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to rollback the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I've shown you how to do this with the part of the script...
      /* and if you still want your list of tags for each title, then here they are */
      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID
    ...which would be turned into an UPDATE ... FROM script:
      UPDATE titles SET typelist = ThisTagList
      FROM (SELECT title_ID, title, STUFF(
              (SELECT ',' + tagname.tag FROM titles thisTitle
                 INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                 INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
               WHERE ThisTitle.title_id = titles.title_ID
               ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
               FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
            FROM titles) f
      INNER JOIN Titles ON f.title_ID = Titles.title_ID
    You'll notice that it isn't quite a round trip because the tags are in a different order, though we've managed to make sure that the primary tag is the first one, as originally. So, we've improved the database for the poor book distributors using PUBS. It is not a major deal, but you've got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation scripts can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for a HDD, but from everything I have read it appears one of the most accurate formulas is the following:
      IO/ms = {seek time} + {rotational latency} + ({block size} / {data transfer rate})
    Which is IOs per millisecond, or what the book I've been reading calls "Disk Service Time". Also, rotational latency is calculated as half of one rotation in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right\wrong? Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4kb:
      Seek Average (Write) = 9.5 - I'll be measuring IOPS for writes
      Spindle speed = 7200rpm
      Average Data Rate = 156MB/s
    So my variables are:
      Seek Time = 9.5ms
      Rotational latency = (.5 / (7200rpm / 60)) = 0.004s = 4ms
      Data Rate = 156MB/s = (0.156MB/ms / 0.004MB) = 39
      9.5ms + 4ms + 39 = 52.5 IO/ms
      1 / (52.5 * 0.001) = 19 IOPS
    19 IOPS for this drive clearly is not right, so what am I doing wrong?
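    For comparison, here is the same calculation worked through as a short Python sketch (an illustration under the assumption that service time is simply seek + rotational latency + transfer time for one 4 KB write at queue depth 1, using the ST3000DM001 figures quoted above). The point it makes is that the transfer term works out to a fraction of a millisecond rather than 39, so the service time is roughly 13.7 ms and the drive lands near 73 IOPS under that model:
      # Rough single-disk IOPS estimate from the data-sheet figures quoted above.
      seek_ms = 9.5                  # average write seek time, ms
      rpm = 7200
      block_kb = 4
      transfer_mb_per_s = 156.0      # average sustained data rate
      rotational_latency_ms = 0.5 / (rpm / 60.0) * 1000.0             # half a rotation, ~4.17 ms
      transfer_ms = (block_kb / 1024.0) / transfer_mb_per_s * 1000.0  # ~0.025 ms to move 4 KB
      service_time_ms = seek_ms + rotational_latency_ms + transfer_ms # ~13.7 ms per IO
      iops = 1000.0 / service_time_ms                                 # ~73 IOPS
      print(rotational_latency_ms, transfer_ms, service_time_ms, iops)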

    Read the article

  • Are these company terms good for a programmer, or should I move?

    - by o_O
    Here are some of the terms and conditions set forward by my employer. Do these make sense for a job like programming? No freelancing in any way, even in your free time outside company work hours (may be okay. Maybe they want their employees to be fully concentrating on their full-time job. Also they don't want their employees to do similar work for a competing client. Completely rational in that sense). - So sort of agreed. Anything you develop, like ideas, design, code etc., while I'm employed there makes them the owner of it. Seriously? Don't you think that's bad (for me)? If I'm to develop something in my free time (by cutting down sleep and working hard), outside company time and resources, is that claim rational? I heard that Steve Wozniak had such a contract while he was working at HP. But that was hardware design, and also those companies pay well, when compared to the peanuts I get. No other kinds of work allowed. Means no open source stuff. Fully dedicated to being a puppet for the employer, though the working environment is sort of okay. According to my assessment this place would score a 10/12 on Joel's test. So are these terms okay, especially considering the fact that I'm underpaid with peanuts?

    Read the article

  • PHP file gets downloaded instead of executed when browsed in any browser

    - by baltusaj
    I have a phpinfo.php file which I am trying to run by browsing to it, but the browser downloads the file instead of executing it. phpinfo.php:
      <?php phpinfo(); ?>
    Following this post, I added the following lines to my /etc/apache2/httpd.conf and restarted apache, but in vain. phpinfo.php still gets downloaded.
      AddType application/x-httpd-php .php .phtml
      AddType application/x-httpd-php-source .phps
    Have I added these lines to the correct file? On an openSUSE forum the following was mentioned. I followed this too but still no success; the same problem persists.
      In case the browser wants to save your php files instead of displaying the content, you should enable php support in the /etc/apache2/mod_userdir.conf file. Add the following line to it, just after the line and restart the server.
      Include /etc/apache2/conf.d/php5.conf

    Read the article

  • ArchBeat Link-o-Rama for December 14, 2012

    - by Bob Rhubart
    JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes | John-Brown Evans
      John-Brown Evans' post continues the series of JMS articles that demonstrate how to use JMS queues in a SOA context. "This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite," John explains. And if you missed the first 5 steps, don't worry - the post includes links.
    Cloud Deployment Models | B. R. Clouse
      Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."
    Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
      Would you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."
    Using Oracle Enterprise Manager Cloud Control 12c with Filer Snapshotting | Porus Homi Havewala
      This concise technical article includes a script for database backup using snapshots and cataloging in RMAN.
    Thought for the Day
      "A program which perfectly meets a lousy specification is a lousy program." — Cem Kaner
      Source: SoftwareQuotes.com

    Read the article

  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder that you store the TFS Cache in. It can take up “some” amount of room, so moving it to another drive can be beneficial. This is the source control Cache that TFS uses to cache data from the database. Moving the Cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a performance improvement (although small) by putting it on another drive.
      1. Create a new directory to store the Cache, e.g. “d:\TfsCache\”
         Figure: Create a new folder
      2. Give the local TFS WPG group full control of the directory
         Figure: You need to use the App Tier Service WPG
      3. In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting (to the appSettings section).
         Figure: The web.config for TFS is stored in the application folder
           <appsettings>
           ...
           <add value="D:\" key="dataDirectory" />
           ...
           </appsettings>
         Figure: Adding this to the web.config will trigger a restart of the app pool
         Figure: Your web.config should look something like this
    The app pool will automatically recycle and Team Web Access will start using the new location. If you then download a file (not via a proxy) a folder with a GUID should be created immediately in the folder from #1. If the folder doesn't appear, then you probably don't have permissions set up properly.

    Read the article

  • How do I choose which way to enable/disable, start/stop, or check the status of a service?

    - by Glyph
    If I want to start a system installed service, I can do:
      # /etc/init.d/some-svc start
      # initctl start some-svc
      # service some-svc start
      # start some-svc
    If I want to disable a service from running at boot, I can do:
      # rm /etc/rc2.d/S99some-svc
      # update-rc.d some-svc disable
      # mv /etc/init/some-svc.conf /etc/init/some-svc.conf.disabled
    Then there are similarly various things I can do to enable services for starting at boot, and so on. I'm aware of the fact that upstart is a (relatively) new thing, and I know about how SysV init used to work, and I'm vaguely aware of a bunch of D-Bus nonsense, but what I don't know is how one is actually intended to interface with this stuff. For example, I don't know how to easily determine whether a service is an Upstart job or a legacy SysV thing, without actually reading through the source of its shell scripts extensively. So: if I want to start or stop a service, either at the moment or persistently, which of these tools should I use, and why? If the answer depends on some attribute (like "this service supports upstart") then how do I quickly and easily learn about that attribute of an installed package? Relatedly, are there any user interface tools which can safely and correctly interact with the modern service infrastructure (upstart, and/or whatever its sysv compatibility is)? For example, could I reliably use sysv-rc-conf to determine which services should start?
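    On the "is this Upstart or SysV?" question specifically, the on-disk layout gives a quick answer: Upstart jobs ship an /etc/init/<name>.conf file, legacy services ship an /etc/init.d/<name> script, and converted services often leave /etc/init.d/<name> as a symlink to /lib/init/upstart-job. The following Python sketch simply encodes those path conventions (they are the usual Ubuntu conventions I'm assuming here, not something stated in the question):
      #!/usr/bin/env python
      # which_init.py - guess how a service is managed by looking at the files it installs.
      import os
      import sys
      def classify(name):
          upstart_conf = "/etc/init/%s.conf" % name
          sysv_script = "/etc/init.d/%s" % name
          if os.path.exists(upstart_conf):
              return "Upstart job (%s)" % upstart_conf
          if os.path.islink(sysv_script) and \
             os.path.realpath(sysv_script).endswith("upstart-job"):
              return "Upstart job (SysV compatibility symlink)"
          if os.path.exists(sysv_script):
              return "SysV init script (%s)" % sysv_script
          return "unknown - no Upstart job or SysV script found"
      if __name__ == "__main__":
          for svc in sys.argv[1:]:
              print("%s: %s" % (svc, classify(svc)))
    Running it as "python which_init.py ssh cron apache2" prints one line per service; it is only a heuristic, and initctl or the package's file list remains the authoritative check.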

    Read the article
