Search Results

Search found 13928 results on 558 pages for 'changes'.

Page 175/558 | < Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >

  • Should my colleagues review each other's code from the source control system?

    - by Daniel Excinsky
    Hi everybody. So here's my story: one of my colleagues reviews all the code committed to the revision control system. I'm not talking about a reasonable review of changes in the parts he is responsible for. He goes through the code file by file, line by line, every new file and every modified one. I feel like I'm being spied on! My guess is that if code has already been committed to the version control system, you should at least trust it to be workable. My question is: am I just being too paranoid, or is the practice of reviewing each other's code a good one? P.S.: We're a team of only three developers, and I fear that if there are more of us, this colleague just won't have time to review all the code we'll write.

    Read the article

  • Need a Holistic view of your Concurrent Processing?

    - by cwarticki
    Need a holistic view of your Concurrent Processing? Choose the CP Analyzer. Go to Doc 1411723.1 for more details and the script download. The Concurrent Processing Analyzer is a Self-Service Health-Check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on Best Practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone to ensure an expected outcome.
    - E-Business Applications Concurrent Processing Analyzer Overview
    - E-Business Applications Concurrent Request Analysis
    - E-Business Applications Concurrent Manager Analysis
    - Identifies Concurrent System Setup and configurations
    - Identifies and recommends Concurrent Best Practices
    - Easy-to-add tool for regular Concurrent Maintenance
    - Execute Analysis anytime to compare trending from past outputs
    Feedback welcome!

    Read the article

  • Rain effect looks like snowfall effect?

    - by Nikhil Lamba
    I am making a game in which I want a rain effect, but I am still some way off from getting it right. At the moment I am doing the following:
        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));
        scene.attachChild(particleSystem);
    But it looks like a snowfall effect. What changes can I make so that it looks like rain? Please correct me. EDIT: here is a link for the snapshot: http://i.imgur.com/bRIMP.png
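
    Below is a minimal sketch (not from the original post) of the kind of adjustments that usually turn a drifting snow look into rain. It reuses only the initializers, modifiers, and the particleSystem object already shown above; the parameter orders (min/max velocity, from/to scale, min/max lifetime) are assumptions inferred from that snippet rather than a checked API reference, so treat the numbers as starting points to tune:

        // Rain reads as fast, mostly vertical streaks with a short lifetime.
        // Narrow horizontal spread, strong downward speed (assumed order: minX, maxX, minY, maxY).
        particleSystem.addParticleInitializer(new VelocityInitializer(-5, 5, 150, 250));
        // Little or no rotation - rotating sprites tend to read as snowflakes.
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 0.0f));
        // Keep drops at a constant small size instead of growing over time.
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 1.0f, 0, 2));
        // Expire quickly so drops disappear near the bottom instead of lingering (assumed: min/max lifetime).
        particleSystem.addParticleModifier(new ExpireModifier(2, 3));

    A tall, thin drop texture (or a vertically stretched particle sprite) usually does more for the rain impression than any single modifier tweak.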

    Read the article

  • Salesforce deployment guideline using Sandbox

    - by ybbest
    1. Create a deployment connection.
    2. Enable the inbound change set settings on the destination environment you would like to deploy the solution to.
    3. Enable the outbound change set settings on the source environment where you package your application.
    The best practice is to package everything in the change set; Salesforce will only deploy the changes into your destination environment. If you only package the changes, you could miss some of them. You can clone the change set on the source environment; however, the initial packaging takes some time, as you need to go through everything and select the components manually. After the change set is packaged, you need to upload it so that the destination environment can see it in its incoming change set list. Click Validate on the change set before deployment. References: Development Lifecycle Guide; Change Sets Best Practices.

    Read the article

  • Modifying OpenSSL library code

    - by Nouar Ismail
    I have been asked to find out whether it is possible to customize an encryption algorithm used by the IPsec protocol in Ubuntu; does anyone have any suggestions on this point? I've read that the encryption operations happen in libcrypto in OpenSSL. When I tried to compile and install OpenSSL from source, the installation itself went fine, but when I checked the version installed on the system with "dpkg -s openssl", it didn't seem to be the version I had just installed. Maybe it was installed successfully, but the questions are: is it the version the system uses for encryption operations? Would it overwrite the old version? And would my changes to the code have any effect? Any help please? Thank you in advance.

    Read the article

  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and then a black dot tries to avoid it and get to the green target. I'm using a very simple way to avoid it: if the black dot is close to the red one, it changes its direction, moves for a while, and then moves toward the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: Smoothness is not really the main point; the main point is to avoid the red blocking "wall" rather than crash into it and only then steer around it. How could I implement a pathfinding algorithm if I basically just have 3 points? (And would it make things much more complicated if you could place multiple obstacles?)
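
    One common starting point for exactly this three-point setup is simple steering: always head toward the target, and add a push away from the obstacle that grows as you get closer. The sketch below (all names hypothetical, not from the question) shows the idea in plain Java; the note after it covers the main weakness.

        // A minimal "seek the target, get pushed away from the obstacle" steering sketch.
        final class AvoidSketch {
            // Advance 'pos' one step of length 'speed' toward 'target',
            // deflected away from 'obstacle' when inside 'avoidRadius'.
            static double[] step(double[] pos, double[] target, double[] obstacle,
                                 double speed, double avoidRadius) {
                // Attraction: unit vector toward the target.
                double ax = target[0] - pos[0], ay = target[1] - pos[1];
                double alen = Math.hypot(ax, ay);
                if (alen < 1e-9) return pos;                       // already at the target
                ax /= alen; ay /= alen;

                // Repulsion: push away from the obstacle, stronger when closer.
                double rx = pos[0] - obstacle[0], ry = pos[1] - obstacle[1];
                double rlen = Math.hypot(rx, ry);
                if (rlen > 1e-9 && rlen < avoidRadius) {
                    double weight = (avoidRadius - rlen) / avoidRadius;   // 0 far .. 1 touching
                    ax += (rx / rlen) * weight * 2.0;
                    ay += (ry / rlen) * weight * 2.0;
                }

                // If attraction and repulsion cancel (obstacle dead ahead), sidestep.
                double len = Math.hypot(ax, ay);
                if (len < 1e-9) { ax = -ry / rlen; ay = rx / rlen; len = 1.0; }

                return new double[] { pos[0] + ax / len * speed, pos[1] + ay / len * speed };
            }

            public static void main(String[] args) {
                double[] p = {0, 0}, target = {100, 0}, wall = {50, 8};
                for (int i = 0; i < 300; i++) {
                    p = step(p, target, wall, 2.0, 30.0);
                }
                System.out.printf("ended near (%.1f, %.1f)%n", p[0], p[1]);
            }
        }

    The weakness of this approach is the head-on case: when the obstacle sits exactly between the dot and the target, attraction and repulsion can cancel and the dot stalls (hence the sidestep above). With several obstacles, a small grid plus a real search such as A* or breadth-first search is usually the more robust choice.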

    Read the article

  • Can't make the Multimedia keys work in LXDE

    - by Uri Herrera
    I've tried to edit the configuration file in ~/.config/openbox/lxde-rc.xml, but the changes don't take effect; I still can't use the keys. The LXDE wiki says that, after confirming that my keyboard sends the correct standardized keyboard events, I should edit the file to my liking. The keyboard does send the correct events and the keys are properly identified, yet nothing happens. Since LXDE doesn't come with a sound preferences option, I have installed xfce4-mixer. I also have a netbook running Lubuntu and its media keys are working fine; I don't know whether what I installed has something to do with this. I also installed the xfce4-volumed volume keys daemon from Synaptic, but to no avail - still nothing. How can I make the multimedia keys of my keyboard work in LXDE with xfce4-mixer? Edit: Here is my lxde-rc.xml file

    Read the article

  • Compiz stopped working

    - by Aikanáro
    I'm on a Sony VAIO laptop (vng-nw330f) with Ubuntu 11.10. One day I used my computer normally and shut it down; the next day, when I turned it on again, everything was different. Compiz effects are not working at all: no matter which plugin I enable or disable (from CCSM), nothing happens and nothing changes. I tried the following commands:
        unity --reset
        unity --reset-icons
        sudo rm -rf .config .gnome .gnome2 .gnome2_private .compiz .icons .fonts .nautilus .themes .qt .local (from my home directory)
        gconftool-2 --recursive-unset /apps/compiz-1
    I also tried reinstalling Compiz (I removed it from the Software Center and installed it again). I have only been using Ubuntu for a couple of weeks; I'm not sure whether there is a desktop called Unity 3D, but I think that's what is missing. I want the default graphical interface of Ubuntu 11.10 back.

    Read the article

  • Are programmers tied to their code?

    - by Jason
    Assuming you are working for a company or in a team of developers, is the code bound to the person who wrote it? When someone develops a particular functionality or an area of the application, is that person the only one who can, is allowed to, or is simply able to make changes to it? I personally think that a well-written piece of code or program should be easily modifiable by any programmer, but what do you see in your environment? At my work, I'd say that the majority of the code can be modified by anyone (that's what coding standards are for, right?). There are some things, though, that are 'property' of certain coworkers, like the module that handles the pay or some important functionality of the production module (we are developing an ERP system). What about your workplace? Am I the only one experiencing this?

    Read the article

  • WCF service and security

    - by Gaz83
    I've been building a WP7 app and now I need it to communicate with a WCF service I made, in order to make changes to an SQL database. I am a little concerned about security, as the user name and password for accessing the SQL database are in the App.config. I have read in places that you can encrypt the user name and password in the config file. Since the username and password are never exposed to the clients connected to the WCF service, would security in my situation be much of a problem? Just in case anyone suggests a method of security: I do not have SSL on my web server.

    Read the article

  • .NET - refactoring code

    - by w0051977
    I have inherited, and now further develop, a large application consisting of an ASP.NET application, a VB6 application and a VB.NET application. The software was poorly written. I am trying to refactor the code as I go along. The changes I am making are not live (they are contained in a folder on my development machine). This is proving to be time consuming, and I am doing it alongside other work which is the priority. My question is: is this a practical approach, or is there a better methodology for refactoring code? I don't have any experience with version control or source control software, and I am wondering if this is what I am missing. I am a sole developer.

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:
        Post: domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
        Category: domain.com/blog/index/cosplay/
    to:
        Post: domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
        Category: domain.com/cosplay/
    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What is worrying me even more is that the pages crawled per day in Webmaster Tools started dropping drastically a few days ago and have just reached a new low in months (from over 2000 to 700). Should I be worried, or will things sort themselves out eventually?

    Read the article

  • Is there a variable width font that does not change width when adding effects like bold, italic?

    - by George Bailey
    NetBeans has a word wrap feature now, but if the font changes width when bold, the text gets all jumpy and is sometimes hard to work with. Edit: It turns out that even with Courier New, NetBeans word wrap still jumps up and down lines at a time at random, so I guess this question no longer needs an answer. However, it seems that there is no answer (at least nobody has offered one yet). I am currently using Comic Sans MS, which gets wider when bold.

    Read the article

  • Oneiric desktop looks seriously defective after updates

    - by Matthijs
    Probably a coincidence, but after applying the fix from "How to get the proper LightDM theme?" and running the latest batch of updates, my desktop suddenly looked like this after rebooting. The funny thing is, it seems to be account specific, as my girlfriend's account appears unaffected. I can't log out because the whole entry is missing at the top right. Rebooting doesn't solve the issue. System Settings (when run from the launcher) lacks a whole lot of icons, and changing the theme doesn't solve the problem (it only changes the window top bar). I'm going to try the solution in "How do I fix my theme?" first, but if that doesn't work (which is likely, since I seem to have a slightly bigger problem than the one in that question) I'd love something else to try (other than creating a new account).

    Read the article

  • Implications of Java 6 End of Public Updates for EBS Users

    - by Steven Chan (Oracle Development)
    The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added):
    Java SE 6 End of Public Updates Notice: After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.
    What does this mean for Oracle E-Business Suite users? EBS users fall under the category of "enterprise users" above. Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates after February 2013. In other words, nothing will change for EBS users after February 2013. EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6. These Java SE 6 updates will be made available to EBS users for the Extended Support periods documented in the Oracle Lifetime Support policy document for Oracle Applications (PDF):
    - EBS 11i Extended Support ends November 2013
    - EBS 12.0 Extended Support ends January 2015
    - EBS 12.1 Extended Support ends December 2018
    Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? No. This upgrade will be highly recommended but currently remains optional. JRE 6 will be available to Windows users to run with EBS for the duration of your respective EBS Extended Support period. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients. The certification of Oracle E-Business Suite with JRE 7 (for desktop clients accessing EBS Forms-based content) is in its final stages. If you plan to upgrade your EBS desktop clients to JRE 7 when that certification is released, you can get a head start on that today.
    Coexistence of JRE 6 and JRE 7 on Windows desktops: The upgrade to JRE 7 will be highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290801.1 and 393931.1.
    Applying updates to JRE 6 and JRE 7 on Windows desktops: Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed. JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page. An RSS feed is available on that site for those who would like to be kept up-to-date.
    What will Mac users need? Oracle will provide updates to JRE 7 for Mac OS X users. EBS users running Macs will need to upgrade to JRE 7 to receive JRE updates. The certification of Oracle E-Business Suite with JRE 7 for Mac-based desktop clients accessing EBS Forms-based content is underway. Mac users waiting for that certification may find this article useful: How to Reenable Apple Java 6 Plug-in for Mac EBS Users.
    Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? No. This upgrade will be highly recommended but will be optional for EBS application tier servers running on Windows, Linux, and Solaris. You can choose to remain on JDK 6 for the duration of your respective EBS Extended Support period. If you remain on JDK 6, you will continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6. The certification of Oracle E-Business Suite with JDK 7 for EBS application tier servers on Windows, Linux, and Solaris, as well as other platforms such as IBM AIX and HP-UX, is planned. Customers running platforms other than Windows, Linux, and Solaris should refer to their Java vendors' sites for more information about their support policies.
    Related articles:
    - Planning Bulletin for JRE 7: What EBS Customers Can Do Today
    - EBS 11i and 12.1 Support Timeline Changes
    - Frequently Asked Questions about Latest EBS Support Changes
    - Critical Patch Updates During EBS 11i Exception to Sustaining Support Period

    Read the article

  • C#/.NET features to cut off, assuming no backward compatibility is needed?

    - by Gulshan
    Any product or framework evolves. Mainly this is done to catch up with the needs of its users, to leverage new computing power, and simply to make it better. Sometimes the primary design goal also changes with the product. C# and the .NET Framework are no exception. As we can see, the present-day fourth version is very different from the first one. But one thing stands as a barricade to this evolution: backward compatibility. In most frameworks/products there are features that would have been cut off if there were no need to support backward compatibility. In your opinion, what are these features in C#/.NET? Please mention one feature per answer.

    Read the article

  • How can the Ubuntu font be used with LyX or LaTeX?

    - by dv3500ea
    I use LyX for creating documents and would like to be able to format the output of my documents so that they use the Ubuntu font. In the LyX document settings, it appears that there are only a fixed number of fonts available. Is it possible to add the Ubuntu font to this list? If not, is there a way to use the Ubuntu font in LaTeX? I can export the LyX document to LaTeX, make my changes and then use pdflatex & co. to create a formatted document.

    Read the article

  • Backbone.js, Rails and code duplication

    - by Matteo Pagliazzi
    I'm building a web app and I need a JS framework like Backbone.js to work with my backend, provided by Rails, which mostly returns JSON objects after DB queries. Searching the web I discovered Backbone, which seems to be complete, quite popular and actively developed, but I've noticed that a lot of things done by Backbone are simply a duplicate of the work done by Rails: for example validation and models. My idea of a "perfect" (for my actual needs) JS MVC (it can't really be called MVC, but I don't have any other name for it) is something really simple that has a function for each action in my Rails controller, triggered by a specific event (user/hash changes, a click on a button...), which sends requests to the server; the server responds with a JSON object, and then I load a template or execute some JS code. Do you have any concerns/suggestions about my idea? Do you know of a "micro" JS framework like the one I have described? If you have worked with Backbone.js + Rails, what can you suggest?

    Read the article

  • Delta-update Firefox Aurora package from PPA

    - by ignite
    I am using Firefox Aurora on Ubuntu 12.04, installed via its PPA (ppa:ubuntu-mozilla-daily/firefox-aurora). As expected of Aurora, I usually get an update every 2-3 days. I have Firefox Aurora installed on Windows too. There I also get updates every 2-3 days, but the size of the update is usually 4-5 MB, while in Ubuntu it's always around 20 MB. What is the reason for this difference? Is there any way I can download and install only the changes, and not the entire Aurora package, again and again?

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build, so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data - a common problem caused by changing a table without re-creating dependent views.
    Using SQL Projects in SSMS: A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects - one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order that they need to be executed. Hence the script filenames start with 100, 110 etc.
    Concatenating SQL Scripts: To concatenate the SQL scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) means append the output of this program into a file. So the command line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.
    Using SQLCmd to execute the concatenated file: Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd, using another batch file as shown below. SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log. So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.
    Using SSIS to execute the concatenated file: To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.
    Tips and Tricks:
    - Add a new-line at the end of every file. The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix for this is to ensure all your files have a new-line at the end.
    - Remove all USE database statements. The SQLCmd call identifies which database the script should be run against, so you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!
    - Do the CREATE DATABASE separately. If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package, before calling the package that executes the concatenated SQL script.

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently: "rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it."
    Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces--dragging and dropping various image assets to build the application's UI, then fine-tuning a CSS file for the finished look and feel. Among many other new features, in the past six months JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures.
    Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including lambda expressions and better parallel libraries (a brief sketch follows this excerpt). These feature changes both simplify code and free up libraries to use parallelism more effectively. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz.
    Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chipset. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed.
    Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features:
        http://openjdk.java.net/projects/jdk8
        http://openjdk.java.net/projects/jdk8/spec
        http://jdk8.java.net
    From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML 5 functionality (WebSocket, JSON, and HTML 5 forms). Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application, using many of the enhancements of Java EE 7 along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.
    Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that the NetBeans 7.3 beta will be released later this week and will include support for creating such HTML 5 project types. Gupta directed conference attendees to http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.
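
    As a rough illustration of the lambda and parallel-library direction Goetz previewed, here is a small, self-contained sketch written against Java SE 8 as it eventually shipped (lambda expressions plus the streams library). It is not code from the keynote demo, and the session names are made up:

        import java.util.Arrays;
        import java.util.List;

        public class LambdaSketch {
            public static void main(String[] args) {
                List<String> sessions = Arrays.asList(
                        "JavaFX on Embedded", "Lambdas in JDK 8", "GlassFish 4", "Project Jigsaw");

                // Pre-Java 8 style: an explicit loop to filter and print.
                for (String s : sessions) {
                    if (s.contains("Lambda")) {
                        System.out.println("loop match: " + s);
                    }
                }

                // Java 8 style: a lambda expression plus the stream library.
                sessions.stream()
                        .filter(s -> s.contains("Lambda"))
                        .forEach(s -> System.out.println("stream match: " + s));

                // Switching from serial to parallel is a one-call change.
                int total = sessions.parallelStream()
                                    .mapToInt(String::length)
                                    .sum();
                System.out.println("total characters (computed in parallel): " + total);
            }
        }

    The point Goetz was making shows up in the last two snippets: once the data flow is expressed declaratively, the library, not the application code, decides how to split the work across cores.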

    Read the article

  • How do you balance documentation requirements with Agile development?

    - by Jeremy
    In our development group there are currently discussions around agile and waterfall methodologies. No one has any practical experience with agile, but we are doing some reading. The agile manifesto lists 4 values:
    - Individuals and interactions over processes and tools
    - Working software over comprehensive documentation
    - Customer collaboration over contract negotiation
    - Responding to change over following a plan
    We are an internal development group developing applications for the consumption of other units in our enterprise. A team of 10 developers builds and releases multiple projects simultaneously, typically with 1, or rarely 2, developers on each project. It seems that, from a supportability perspective, the organization needs to put some real value on documentation, as without it there are serious risks when resourcing changes. With agile favouring interactions and software deliverables over processes and documentation, how do you balance that with the requirements of supportable systems and maintaining knowledge and understanding of how those systems work? With a waterfall approach, which favours documentation (requirements before design, design specs before construction), it is easy to build a process that meets some of the organizational requirements - how do we do this with an agile approach?

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.
    - In InnoDB, compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time.
    - InnoDB uses zlib for compression.
    - The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e.: we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always have only the compressed page.
    - When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e.: changes are applied to both atomically.
    - Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions.
    - DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge.
    - A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages.
    Now let's talk about the three major improvements that we made in MySQL 5.6.
    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly; in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the current behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.
    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e.: you can change it on the fly.
    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e.: we let some space in the 16K page go unused in an attempt to ensure that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well; that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:
    - innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start using padding. The value 0 has the special meaning of disabling the padding.
    - innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of an uncompressed data page that can be reserved as pad.
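
    For a concrete feel of two of the dynamic knobs described above, here is a small JDBC sketch that adjusts them and creates a compressed table. The connection URL, credentials and table name are hypothetical; it assumes a MySQL 5.6 server with the Barracuda file format and innodb_file_per_table enabled, the Connector/J driver on the classpath, and an account privileged to set global variables:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CompressionTuningSketch {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                             "jdbc:mysql://localhost:3306/test", "root", "secret");
                     Statement st = con.createStatement()) {

                    // Both parameters are global and dynamic, so no server restart is needed.
                    st.execute("SET GLOBAL innodb_compression_level = 9");                  // heavier zlib effort
                    st.execute("SET GLOBAL innodb_compression_failure_threshold_pct = 10"); // tolerate 10% failures before padding

                    // A compressed table with an 8K compressed page size.
                    st.execute("CREATE TABLE IF NOT EXISTS t_compressed ("
                             + "  id BIGINT PRIMARY KEY,"
                             + "  payload VARCHAR(4000)"
                             + ") ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8");
                }
            }
        }

    Because the two variables are dynamic, the SET GLOBAL statements take effect immediately for new operations, without editing my.cnf or restarting the server.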

    Read the article

  • After upgrading to 13.10, I can't input Japanese or Chinese in Emacs

    - by oda
    I have just upgraded Ubuntu from 13.04 to 13.10. It seems iBus has undergone big changes. I went to System Settings - Text Entry settings and added the "Chinese pinyin" and "Japanese anthy" input methods. They work well when I input Chinese or Japanese in a terminal or a .txt file. But when I want to input Chinese or Japanese in Emacs, even though I have enabled ibus-mode in the buffer and switched to the Chinese pinyin or Japanese anthy input method, it just outputs English characters. Below is the ibus configuration in my .emacs. By the way, it worked well before I upgraded Ubuntu to 13.10 and Emacs to 24.3.1.
        (add-to-list 'load-path (concat my-emacs-path "/ibus-el-0.3.2"))
        ;;(setq ibus-python-shell-command-name "python2.7")
        (require 'ibus)
        ;; Turn on ibus-mode automatically after loading .emacs
        (add-hook 'after-init-hook 'ibus-mode-on)
        (setq ibus-cursor-color '("red" "blue" "limegreen"))

    Read the article

  • Live boot from hard disk problem

    - by user172277
    I've installed Ubuntu Desktop 13.04 32-bit. Next I configured /etc/grub.d/40_custom to boot a live system from ubuntu.iso (also Desktop 13.04 32-bit). I used this configuration:
        menuentry "Ubuntu 13.04 Desktop" {
            loopback loop /boot/ubuntu.iso
            linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=/boot/ubuntu.iso noeject noprompt splash --
            initrd (loop)/casper/initrd.lz
        }
    and it works OK. Later I made some changes to my installed Ubuntu: I changed some configuration, installed additional packages and so on. After that I made a backup using the Remastersys tool, which gave me a new ISO file that I wanted to use. And here was the first problem: Remastersys creates only an initrd.gz file. So I changed the grub configuration from:
        initrd (loop)/casper/initrd.lz
    to:
        initrd (loop)/casper/initrd.gz
    But after that, when I reboot my system, I get the error:
        /init: line 3: can't open /dev/sr0: No medium found
    Any ideas how to fix it? Best regards, Bartosz

    Read the article
