Search Results

Search found 17054 results on 683 pages for 'wordpress plugin dev'.


  • Improve Playback Using Enhancements in Windows Media Player 12

    - by DigitalGeekery
    Are you looking for ways to improve the playback of your media in Windows Media Player 12? We'll show you how to do that by using the enhancements in WMP 12. If you are in Library mode, you'll need to click the icon at the lower right to switch to Now Playing mode. Right-click anywhere in Media Player while in Now Playing mode, select Enhancements, and select any of the available options. You can switch between the individual enhancements by clicking the right and left buttons at the top left. Crossfading and Auto Volume Leveling: The Auto Volume Leveling setting is a simple on / off toggle; it only works if your MP3 or WMA files have volume leveling information values. You can automatically add volume leveling information values to all files you add to your library by switching to Library view, going to Tools > Options, and selecting Add volume leveling information values for new files on the Library tab. Click OK when finished. Crossfading will gradually decrease the volume of the song that is ending (fade out) and increase the volume of the song that is beginning. Click Turn on Crossfading, and then click and drag the slider left or right to change the amount of overlap between tracks. Graphic Equalizer: The graphic equalizer is toggled on and off by clicking Turn on / Turn off at the top left. You can select pre-defined equalizer settings by music genre by clicking the Default list. The radio buttons on the left allow you to move the sliders individually, in a loose group, or in a tight group. You can always return to the default settings by clicking Reset. Play Speed Settings: Choose a pre-defined setting by clicking Slow, Normal, or Fast. Uncheck Snap slider to common speeds, then move the slider right and left to your desired speed. If nothing else, these settings provide a little fun and amusement. Quiet Mode: Quiet Mode will level out any sharp volume highs and lows within a single track. Simply toggle the setting on or off and select whether you prefer Medium difference or Little difference by selecting one of the radio buttons. SRS WOW effects: SRS WOW effects enhance low-frequency and stereo sound performance. Click Turn on to enable the TruBass and WOW Effect sliders. You can also optimize for your speaker type; click to switch between Regular, Large, and Headphones. Video Settings: Video Settings allow you to adjust the Hue, Brightness, Saturation, and Contrast. You can also adjust the zoom settings by clicking Select video zoom settings. Dolby Digital Settings: Choose between Normal, Night, and Theater settings to adjust the audio for Dolby Digital content. This setting will only affect media with Dolby Digital sound. Looking for more ways to improve your media experience in WMP 12? Check out how to update metadata and cover art and how to share media with other Windows 7 computers on your home network.

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It's funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I'm talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures. In fact, most of the entire API has moved to the v1 URL at "https://api.twitter.com/1/", except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place, and that's what factories are for. Anyway, the point is that I wouldn't have to do this kind of stuff (as much fun as it is) if Twitter had more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. ...now if they would just get their stuff together on the mess they made with geo-location and places... but again, that's all transparent if you're using LINQ to Twitter, because I pulled all of that together in a consistent way so that you don't have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here's an example of instantiating a TwitterContext now:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))

    The constructor's third parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what's coming next; another constructor, but with the SearchUrl parameter set to the new URL, as follows:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))

    One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext's SearchUrl property. Here's an example:

        twitterCtx.SearchUrl = "https://api.twitter.com/1/";

        var trends =
            (from trend in twitterCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

    Notice how I set the SearchUrl property just in time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it's your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I'll be updating the Trend API to use the same URL as all other APIs soon, so the only API left using the SearchUrl will be Search; but for the short term, it's Trends and Search. Until I make this change, you'll have a viable work-around by setting the URL yourself, as explained above.
    These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that's the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext's BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services. This is a testament to the success of the Twitter API and its popularity. No doubt we'll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo
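    For what it's worth, a minimal sketch of the dedicated-context work-around mentioned above; the TwitterContext constructor and BaseUrl property are the ones the article describes, while the variable names and the Identi.ca endpoint are my own examples:

        // one context on the v1 URL for Trends, one kept on the old Search URL
        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))
        using (var searchCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
        {
            // run Trend queries through twitterCtx and Search queries through searchCtx
        }

        // and since BaseUrl is settable, pointing at another Twitter-compatible
        // service (hypothetical endpoint) works the same way:
        // twitterCtx.BaseUrl = "https://identi.ca/api/";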

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04; it is running BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28, etc.) the backups are transferred to an external hard drive; we have three and rotate one drive for another every day. The problem we are having is that we randomly experience input / output errors. When this happens you cannot read / write to the drive; it hasn't unmounted, so I can still cd to the mount point /media/backup1. The drives are not faulty, as it's happening on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:

        gzip: stdout: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error

    EDIT » Resolved: So it turns out Quamis was right; even though I didn't think it was possible, it was actually a problem with the drives. You see, we have three drives, all formatted to ext2, and on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives: fsck /dev/sdb1 - this found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives, etc. As the drives are in the ext2 format they aren't journalled, and thus aren't protected against such issues. The drives are now working beautifully, thanks all! :D
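    A sketch of the fix plus a preventive step: tune2fs can add a journal to an existing ext2 filesystem, turning it into ext3 in place, which protects against exactly this kind of unclean-shutdown corruption. The device names are examples; unmount first and adjust to your drives:

        sudo umount /media/backup1                     # the filesystem must not be mounted
        sudo fsck -f /dev/sdb1                         # find and repair any damage
        sudo tune2fs -j /dev/sdb1                      # add a journal: ext2 becomes ext3
        sudo mount -t ext3 /dev/sdb1 /media/backup1    # remount (update /etc/fstab too)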

    Read the article

  • NetBeans Java Hints: Quick & Dirty Guide

    - by Geertjan
    In NetBeans IDE 7.2, a new wizard will be found in the "Module Development" category in the New File dialog, for creating new Java Hints. Select a package in a NetBeans module project. Right-click, choose New/Other.../Module Development/Java Hint. You'll then see this: Fill in:
    - Class Name: the name of the class that should be generated. E.g. "Example".
    - Hint Display Name: the display name of the hint itself (as will appear in Tools/Options). E.g. "Example Hint".
    - Warning Message: the warning that should be produced by the hint. E.g. "Something wrong is going on".
    - Hint Description: a longer description of the hint; will appear in Tools/Options and eventually some other places. E.g. "This is an example hint that warns about an example problem."
    - Will also provide an Automatic Fix: whether the hint will provide some kind of transformation. E.g. "yes".
    - Fix Display Name: the display name of such a fix/transformation. E.g. "Fix the problem".
    Click Finish. This should generate "Example.java", the hint itself:

        import com.sun.source.util.TreePath;
        import org.netbeans.api.java.source.CompilationInfo;
        import org.netbeans.spi.editor.hints.ErrorDescription;
        import org.netbeans.spi.editor.hints.Fix;
        import org.netbeans.spi.java.hints.ConstraintVariableType;
        import org.netbeans.spi.java.hints.ErrorDescriptionFactory;
        import org.netbeans.spi.java.hints.Hint;
        import org.netbeans.spi.java.hints.HintContext;
        import org.netbeans.spi.java.hints.JavaFix;
        import org.netbeans.spi.java.hints.TriggerPattern;
        import org.openide.util.NbBundle.Messages;

        @Hint(displayName = "DN_com.bla.Example", description = "DESC_com.bla.Example", category = "general") //NOI18N
        @Messages({"DN_com.bla.Example=Example Hint",
                   "DESC_com.bla.Example=This is an example hint that warns about an example problem."})
        public class Example {

            @TriggerPattern(value = "$str.equals(\"\")", //Specify a pattern as needed
                    constraints = @ConstraintVariableType(variable = "$str", type = "java.lang.String"))
            @Messages("ERR_com.bla.Example=Something wrong is going on")
            public static ErrorDescription computeWarning(HintContext ctx) {
                Fix fix = new FixImpl(ctx.getInfo(), ctx.getPath()).toEditorFix();
                return ErrorDescriptionFactory.forName(ctx, ctx.getPath(), Bundle.ERR_com_bla_Example(), fix);
            }

            private static final class FixImpl extends JavaFix {

                public FixImpl(CompilationInfo info, TreePath tp) {
                    super(info, tp);
                }

                @Override
                @Messages("FIX_com.bla.Example=Fix the problem")
                protected String getText() {
                    return Bundle.FIX_com_bla_Example();
                }

                @Override
                protected void performRewrite(TransformationContext ctx) {
                    //perform the required transformation
                }
            }
        }

    It should also generate "ExampleTest.java", a test for it. Unfortunately, the wizard infrastructure is not capable of handling changes related to test dependencies.
    So ExampleTest.java has a todo list at its beginning:

        /* TODO to make this test work:
         - add test dependency on Java Hints Test API (and JUnit 4)
         - to ensure that the newest Java language features supported by the IDE are available,
           regardless of which JDK you build the module with:
         -- for Ant-based modules, add "requires.nb.javac=true" into nbproject/project.properties
         -- for Maven-based modules, use dependency:copy in validate phase to create
            target/endorsed/org-netbeans-libs-javacapi-*.jar and add to endorseddirs
            in maven-compiler-plugin configuration
         */

    Warning: if this is a project for which tests never existed before, you may need to close & reopen the project, so that the "Unit Test Libraries" node appears - a bug in apisupport projects, as far as I can tell. Thanks to Jan Lahoda for the above rough guide.

    Read the article

  • Can't fix broken packages

    - by AWE
    I am too dumb but determined to use Ubuntu, so I paid a professional to install it for me (dual-boot 11.10 with Win7). When I came home I got a lot of things from the Software Center. Skype did not have a download button, so I googled it, and Ubuntu help told me to do this:

        sudo add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner"

    and then this:

        sudo apt-get update && sudo apt-get install skype

    The terminal told me "that this is potentially harmful..." but I thought it was Ubuntu language meaning "are you sure?" Now the computer is mute. Items cannot be installed or removed until the package catalog is repaired, so I want to repair it, but the package operation fails. "sudo aptitude -f install" - command not found. Synaptic Package Manager tells me that I have two broken packages, libc6 and libc6-dev, so I do this:

        sudo apt-get update && sudo apt-get upgrade

    which tells me to do this:

        sudo apt-get -f install

    which ends up like this:

        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    When fixing broken packages in Synaptic Package Manager I get this:

        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)
        A package failed to install. Trying to recover:
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    I want to become a Linux geek, but it is harder than I thought. Please help!
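    The dpkg output above names the immediate blocker: root's PATH has lost the sbin directories, so ldconfig can't be found. Assuming the binaries themselves are still intact, one hedged first step is to restore a sane PATH in a root shell and let apt retry:

        sudo su -
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        dpkg --configure -a    # finish any half-configured packages
        apt-get -f install     # then retry fixing libc6 / libc6-dev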

    Read the article

  • How to install chrome autosave extension?

    - by Oguz Can Sertel
    I would like to install the Chrome autosave plugin on Ubuntu. When I try to install it with these steps https://github.com/NV/chrome-devtools-autosave-server , I get some errors... node and npm were not installed out of the box on Ubuntu 12.10, so I installed them with these commands:

        sudo apt-get install npm
        sudo apt-get install node

    and I tried to install autosave. Here is the output:

        sudo npm install -g autosave
        npm http GET https://registry.npmjs.org/autosave
        npm http 304 https://registry.npmjs.org/autosave
        npm http GET https://registry.npmjs.org/commander
        npm http 304 https://registry.npmjs.org/commander
        /usr/local/bin/autosave -> /usr/local/lib/node_modules/autosave/bin/autosave

        > [email protected] install /usr/local/lib/node_modules/autosave
        > node ./scripts/install.js

        npm ERR! error installing [email protected]
        npm WARN This failure might be due to the use of legacy binary "node"
        npm WARN For further explanations, please read
        npm WARN /usr/share/doc/nodejs/README.Debian
        npm WARN
        npm ERR! [email protected] install: `node ./scripts/install.js`
        npm ERR! `sh "-c" "node ./scripts/install.js"` failed with 1
        npm ERR!
        npm ERR! Failed at the [email protected] install script.
        npm ERR! This is most likely a problem with the autosave package,
        npm ERR! not with npm itself.
        npm ERR! Tell the author that this fails on your system:
        npm ERR! node ./scripts/install.js
        npm ERR! You can get their info via:
        npm ERR! npm owner ls autosave
        npm ERR! There is likely additional logging output above.
        npm ERR!
        npm ERR! System Linux 3.5.0-17-generic
        npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "autosave"
        npm ERR! cwd /home/naczu
        npm ERR! node -v v0.6.19
        npm ERR! npm -v 1.1.4
        npm ERR! code ELIFECYCLE
        npm ERR! message [email protected] install: `node ./scripts/install.js`
        npm ERR! message `sh "-c" "node ./scripts/install.js"` failed with 1
        npm ERR! errno {}
        npm ERR!
        npm ERR! Additional logging details can be found in:
        npm ERR! /home/naczu/npm-debug.log
        npm not ok

    And here is README.Debian:

        nodejs for Debian
        =================

        packaged modules
        ----------------
        The global search path for modules is /usr/lib/nodejs. Future packages of node modules will use that directory, so it should be used wisely.

        user modules
        ------------
        Node looks for modules in the ./node_modules directory first; please read node#modules documentation carefully for more information. Node does not look for modules in /usr/local/lib/node_modules, where npm puts them. Please read npm-link(1) of the npm package to understand how to properly use npm-installed modules in a project. Note that require.paths is not supported in future node versions. See also node(1) for more information about NODE_PATH.

        nodejs command
        --------------
        The upstream name for the Node.js interpreter command is "node". In Debian the interpreter command has been changed to "nodejs". This was done to prevent a namespace collision: other commands use the same name in their upstreams, such as ax25-node from the "node" package. Scripts calling Node.js as a shell command must be changed to instead use the "nodejs" command.
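    The README quoted above is the clue: Debian/Ubuntu renamed the interpreter to nodejs, while autosave's install script shells out to node. A hedged work-around is to give the script the name it expects, for example with a symlink (on later releases the nodejs-legacy package serves the same purpose):

        sudo ln -s /usr/bin/nodejs /usr/local/bin/node   # let scripts find "node"
        sudo npm install -g autosave                     # then retry the install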

    Read the article

  • My First Weeks at Red Gate

    - by Jess Nickson
    Hi, my name’s Jess and early September 2012 I started working at Red Gate as a Software Engineer down in The Agency (the Publishing team). This was a bit of a shock, as I didn’t think this team would have any developers! I admit, I was a little worried when it was mentioned that my role was going to be different from normal dev. roles within the company. However, as luck would have it, I was placed within a team that was responsible for the development and maintenance of Simple-Talk and SQL Server Central (SSC). I felt rather unprepared for this role. I hadn’t used many of the technologies involved and of those that I had, I hadn’t looked at them for quite a while. I was, nevertheless, quite excited about this turn of events. As I had predicted, the role has been quite challenging so far. I expected that I would struggle to get my head round the large codebase already in place, having never used anything so much as a fraction of the size of this before. However, I was perhaps a bit naive when it came to how quickly things would move. I was required to start learning/remembering a number of different languages and technologies within time frames I would never have tried to set myself previously. Having said that, my first week was pretty easy. It was filled with meetings that were designed to get the new starters up to speed with the different departments, ideals and rules within the company. I also attended some lightning talks being presented by other employees, which were pretty useful. These occur once a fortnight and normally consist of around four speakers. In my spare time, we set up the Simple-Talk codebase on my computer and I started exploring it and worked on my first feature – redirecting requests for URLs that used incorrect casing! It was also during this time that I was given my first introduction to test-driven development (TDD) with Michael via a code kata. Although I had heard of the general ideas behind TDD, I had definitely never tried it before. Indeed, I hadn’t really done any automated testing of code before, either. The session was therefore very useful and gave me insights as to some of the coding practices used in my team. Although I now understand the importance of TDD, it still seems odd in my head and I’ve yet to master how to sensibly step up the functionality of the code a bit at a time. The second week was both easier and more difficult than the first. I was given a new project to work on, meaning I was no longer using the codebase already in place. My job was to take some designs, a WordPress theme, and some initial content and build a page that allowed users of the site to read provided resources and give feedback. This feedback could include their thoughts about the resource, the topics covered and the page design itself. Although it didn’t sound the most challenging of projects when compared to fixing bugs in our current codebase, it nevertheless provided a few sneaky problems that had me stumped. I really enjoyed working on this project as it allowed me to play around with HTML, CSS and JavaScript; all things that I like working with but rarely have a chance to use. I completed the aims for the project on time and was happy with the final outcome – though it still needs a good designer to take a look at it! I am now into my third week at Red Gate and I have temporarily been pulled off the website from week 2. I am again back to figuring out the Simple-Talk codebase. 
Monday provided me with the chance to learn a bunch of new things: system level testing, Selenium and Python. I was set the challenge of testing a bug fix dealing with the search bars in Simple-Talk. The exercise was pretty fun, although Mike did have to point me in the right direction when I started making the tests a bit too complex. The rest of the week looks set to be focussed on pair programming with Mike as we work together on a new feature. I look forward to the challenges that still face me and hope that I will be able to get up to speed quickly. *fingers crossed*

    Read the article

  • Configure Jenkins and Tomcat using Puppet on Vagrant

    - by ex3v
    I'm playing with setting up my first Spring + Jenkins + Tomcat CI dev environment. For now it's just a test/fun phase, but in the near future I'll be starting a new project with my coworkers. That's the reason I want the development environment virtualized and exactly the same on every development machine, as well as on the production server. I chose to use Vagrant and to try to write Puppet scripts that not only install everything, but also configure everything, so each of us will have the same Jenkins plugins, the same Jenkins and Tomcat login and password, and, literally, after calling vagrant up we are ready to work. What I've managed to do so far is installation of the needed packages and port forwarding. My Vagrantfile looks like this (comments stripped):

        VAGRANTFILE_API_VERSION = "2"
        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          config.vm.box = "precise32"
          config.vm.box_url = "http://files.vagrantup.com/precise32.box"
          config.vm.network :forwarded_port, guest: 80, host: 8090
          config.vm.network :forwarded_port, guest: 8080, host: 8091
          config.vm.network :private_network, ip: "192.168.33.10"
          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "puppet/"
            puppet.manifest_file = "default.pp"
            puppet.options = ['--verbose']
          end
        end

    And this is my Puppet file:

        Exec { path => [ "/bin/", "/sbin/", "/usr/bin/", "/usr/sbin/" ] }

        class system-update {
          exec { 'apt-get update':
            command => 'apt-get update',
          }
          $sysPackages = [ "build-essential" ]
          package { $sysPackages:
            ensure  => "installed",
            require => Exec['apt-get update'],
          }
        }

        class tomcat {
          package { "tomcat":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "tomcat":
            ensure  => "running",
            require => Package["tomcat"],
          }
        }

        class jenkins {
          package { "jenkins":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "jenkins":
            ensure  => "running",
            require => Package["jenkins"],
          }
        }

        include system-update
        include tomcat
        include jenkins

    Now, when I hit vagrant provision and go to http://localhost:8091/ I can see Jenkins running, so the above script works. The next step is configuring Jenkins and Tomcat by extending the above Puppet scripts. I'm pretty green when it comes to CI. After wandering around the web I've found a few tutorials about Jenkins configuration (here's one of them). I really want to move the configuration presented in this tutorial into the Puppet file, so that when I spread my Vagrantfile and Puppet file between my coworkers, I will be sure that everyone has exactly the same setup. Unfortunately I'm also green about using Puppet; I don't know how to do this. Any help will be appreciated.
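    For what it's worth, one hedged way to express the plugin part of that tutorial in Puppet is a small defined type that drops .hpi files into the Jenkins plugins directory. This is a sketch only: it assumes the Debian package's default /var/lib/jenkins home and the public Jenkins update-centre URL, and the plugin names are just examples:

        define jenkins::plugin() {
          # $name is the resource title, e.g. "git"
          exec { "fetch-jenkins-plugin-${name}":
            command => "/usr/bin/wget -q -O /var/lib/jenkins/plugins/${name}.hpi https://updates.jenkins-ci.org/latest/${name}.hpi",
            creates => "/var/lib/jenkins/plugins/${name}.hpi",
            require => Package["jenkins"],
            notify  => Service["jenkins"],  # Jenkins loads new plugins on restart
          }
        }

        jenkins::plugin { ['git', 'greenballs']: }

    Credentials are harder to manage declaratively; a common approach is to template Jenkins' config.xml and users/*/config.xml with Puppet file resources, but that is beyond this sketch.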

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via a My Oracle Support patch, which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.

    What's New in This Update?

    This new startCD version 12.2.0.48 includes important fixes for multi-node installs, RAC, pre-install checks, platform-specific issues, and upgrade scenario failures:

    18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
    18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
    18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
    18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
    18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
    18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
    18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
    18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
    18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
    18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
    18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
    18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
    18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
    18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
    18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
    18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
    18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
    18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
    18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
    18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
    18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
    18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
    18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
    18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
    18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
    18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
    18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
    18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
    18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
    18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
    18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
    18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
    18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
    18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
    18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
    18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
    17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
    17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
    17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
    17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
    17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
    17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
    17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
    17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
    17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
    17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
    17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
    17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
    17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
    17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
    17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
    17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
    17630972 - /TMP PRE-REQ INSTALLATION CHECK
    17617245 - 12.2 VISION INSTALL FAILS ON AIX
    17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
    17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
    17588765 - CHECKER VERSION AND PLUGIN VERSION
    17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
    17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
    17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL

    References

    12.2 Documentation Library
    1581299.1 : EBS 12.2 Product Information Center
    1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
    1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
    1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
    1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes

    Related Articles

    Oracle E-Business Suite 12.2 Now Available
    startCD options to install Oracle E-Business Suite Release 12.2

    Read the article

  • How do I set up an nvidia graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i; the clue comes from Xorg.0.log:

        [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
        [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This whole thing started from video tearing when watching MythTV recordings. It didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the Screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Keyboard0" "CoreKeyboard"
            # InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Mouse0"
        #    Driver "mouse"
        #    Option "Protocol" "auto"
        #    Option "Device" "/dev/psaux"
        #    Option "Emulate3Buttons" "no"
        #    Option "ZAxisMapping" "4 5"
        #EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Keyboard0"
        #    Driver "kbd"
        #EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SAMSUNG"
            HorizSync 26.0 - 81.0
            VertRefresh 24.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 240"
            Option "TripleBuffer" "1"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    After a little digging, the question changes slightly, to wit... Per Chapter 19 of the nvidia README: "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI-connected LCD monitor says use the first device as preferred.
        Prefer first detailed timing : Yes

    Also:

        (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

        (--) NVIDIA(0): Detailed Timings:
        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):     Pixel Clock      : 148.50 MHz
        (--) NVIDIA(0):     HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):     HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):     VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):     VSyncEnd, VTotal : 1089, 1125
        (--) NVIDIA(0):     H/V Polarity     : +/+

    This is the actual mode selected (from Xorg.0.log):

        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):     Pixel Clock      : 74.18 MHz
        (--) NVIDIA(0):     HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):     HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):     VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):     VSyncEnd, VTotal : 1094, 1124
        (--) NVIDIA(0):     H/V Polarity     : +/+
        (--) NVIDIA(0):     Extra            : Interlaced
        (--) NVIDIA(0):     CEA Format       : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
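    Given that the EDID lists both the 148.50 MHz progressive timing and the 74.18 MHz interlaced one, a hedged fix is to name the progressive mode explicitly instead of relying on nvidia-auto-select; the DFP-1 name below is taken from your log, so verify it with nvidia-settings:

        # in the Screen section, replace nvidia-auto-select with an explicit mode:
        Option "metamodes" "DFP-1: 1920x1080_60 +0+0"   # the 148.50 MHz progressive timing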

    Read the article

  • Upgrading Code from 2007 to 2010

    - by MOSSLover
    So I've been doing some upgrades just to see if things will work from 2007 to 2010. So far most of the stuff I want works, but obviously there are some things that break. Did you guys know that in 2007 you could add a webpart to the view pages for lists and libraries without losing the toolbar? In 2010 the ribbon disappears every time you add a webpart. So if you are using Scot Hillier's Codeplex project to hide buttons it will not work the same way, because the ribbon is going to disappear altogether. I have also learned another reason why standalone installations are the bane of my existence. Nine times out of ten the installation is done using Network Service as the application pool account. You are wondering why this is bad? Well, let's just say the site collection administrator with local admin rights wants to attach the IIS worker process and debug, say, a webpart. Visual Studio 2010 will throw a nasty error that tells you that you are not an administrator. You will say, but I am an administrator! I have all the correct group permissions on the server and on SQL and in SharePoint. Then you will go in and decide, let's add my own admin account just to see if I can attach the debugger, and you will notice that works properly. So the moral of the story is: create a separate account on your development environment to run all the SharePoint services and such. You don't need to go all out and create the best-practices amount of accounts if it's just your dev environment. I would at least create one single account to run all your SharePoint processes (Services, SQL, and App Pool). Also, don't run a standalone install unless you want to kill kittens (this is a quote from Todd Klindt). We love kittens; they are cute and awesome. Besides, you learn more if you click Complete and just skip standalone. You will learn how to set up SQL Server 2008 and you will learn how to configure your environment. It will help you in the long run. So I have ranted enough for today; I figure these are enough tidbits for you this time around. The two of you who read my blog, and I know some of you are friends who don't understand SharePoint. I might as well have just done "wahwahwahwah" in Charlie Brown adult speak. Thanks for reading as usual. I'll catch you all when I complain more about the upgrade process and share more tidbits, which will inevitably become a presentation at a conference or two. Technorati Tags: Upgrade Code SharePoint 2007 to 2010,Visual Studio 2010,SharePoint 2010

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module, and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience with any PHP e-commerce system, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I google is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields into the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files, and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little-to-no helpful and clear documentation (unlike CodeIgniter), and various breaking changes between minor point revisions which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved from 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer-friendly, with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easy customization to bring the rewards system and ERP integration over. What should the client upgrade to in this case?

    Read the article

  • How to get my IR remote to work? Lirc can't see it

    - by user1234567
    I'm using Ubuntu 11.10 (amd64) and I'm trying to get my infrared remote control working. The IR device is part of a DVB-T USB stick (based on an RTL2832U chip). I'm using these drivers - it's the only way of getting this device to work under 11.10 that I found. It's a big improvement over previous Ubuntu versions, where I had to edit the driver's code. The device works quite well - and the IR part of it works, too. The driver's page says that the code is in alpha stage, but I'm pretty sure that my issue has nothing to do with that. If, and only if, the driver's module is loaded with the parameter rtl2832u_rc_mode=2 (which means "use NEC protocol for IR"), the remote kind of works. I can see this by running cat /dev/.. ../input6 - when I press a button, random letters appear. The remote works just like a keyboard, but the keys are totally messed up - when I press '5' the volume goes down, etc. I would like to use Lirc to fix that, but Lirc can't detect my device (i.e. irw shows nothing). I suspect it's because something gets control of the device and sets it up as a keyboard. Lirc seems to be working - its KDE settings module works too - but it just doesn't detect the device. The Lirc page describes this issue, but since 2009 - the last year that page was updated - Ubuntu moved from HAL (described there) to DeviceKit, rendering the provided instructions useless. I had a similar issue with my previous remote, but the keys were not messed up so much - the remote was usable, so I gave up trying to get Lirc working. I tried the answer provided here, but it changed nothing. I also tried forcing lircd to use my device, but this didn't work either:

        for i in /sys/class/input/input* ; do echo -n "$(basename "$i"): "; cat "$i/name"; done

    shows

        input0: Power Button
        input1: Power Button
        input2: Logitech Logitech USB Keyboard
        input3: A4Tech PS/2+USB Mouse
        input6: IR-receiver inside an USB DVB receiver

    But when I run (as root):

        lircd -n --device=name='IR*'

    (also tried with the full name) I always see:

        lircd-0.9.0[3983]: lircd(default) ready, using /var/run/lirc/lircd
        lircd-0.9.0[3983]: accepted new client on /var/run/lirc/lircd
        lircd-0.9.0[3983]: could not get file information for name=IR*
        lircd-0.9.0[3983]: default_init(): No such file or directory
        lircd-0.9.0[3983]: Failed to initialize hardware

    So, how do I set up Lirc with the devinput driver in such a case?
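    For what it's worth, a hedged sketch of pointing lircd's devinput driver straight at the kernel event node instead of matching on name= (this assumes input6 pairs with event6; confirm in /proc/bus/input/devices, where the receiver's Handlers line names the eventN node):

        # find which eventN belongs to the receiver
        grep -A 4 'IR-receiver' /proc/bus/input/devices
        # start lircd with the devinput driver against that node
        sudo lircd --driver=devinput --device=/dev/input/event6
        irw    # press buttons; decoded key names should appear here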

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal. After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works. I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There are a series of CLR commands that you use to configure a test and the test assertions. In between, you're calling TSQL: either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult, and therefore, less useful. Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry. You can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret over. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.
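    As a taste of those concepts, here is a minimal sketch of a tSQLt-style test; the dbo.OrderLines table and its schema are made-up examples, while the tSQLt calls (FakeTable, AssertEqualsTable, Run) are the standard ones the tool wraps:

        EXEC tSQLt.NewTestClass 'OrderTests';
        GO
        CREATE PROCEDURE OrderTests.[test order total sums its lines]
        AS
        BEGIN
            -- isolate the table under test; constraints and triggers are stripped off
            EXEC tSQLt.FakeTable 'dbo.OrderLines';
            INSERT INTO dbo.OrderLines (OrderId, Amount) VALUES (1, 10.00), (1, 5.00);

            SELECT 1 AS OrderId, CAST(15.00 AS decimal(10,2)) AS Total INTO #Expected;
            SELECT OrderId, CAST(SUM(Amount) AS decimal(10,2)) AS Total INTO #Actual
              FROM dbo.OrderLines GROUP BY OrderId;

            -- the table comparison called out above as the big win
            EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
        END;
        GO
        EXEC tSQLt.Run 'OrderTests';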

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: 2 years ago I was really excited to find out that mdadm is so powerful it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x1TB drives and made a RAID-5. It was fine for a year. Then I bought 2 more, and tried to reshape to a RAID-6 out of 5 drives, and due to some mess with superblock versions, lost all content. I had to rebuild it from scratch, but 2TB of data were gone. Yesterday I bought 2 more drives, and this time I had everything: a properly built array, a UPS. I disabled the write intent map, added the 2 new drives as spares, and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100kb/sec. After processing the first 37MB at such an amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared it had recreated the intent map, as it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all the mdadm processes deadlock and don't do anything:

        PID   USER  PR  NI  VIRT   RES  SHR  S  %CPU  %MEM  TIME+    COMMAND
        1937  root  20   0  12992  608  444  D  0     0.1   0:00.00  mdadm
        2283  root  20   0  12992  852  704  D  0     0.1   0:00.01  mdadm
        2287  root  20   0  0      0    0    D  0     0.0   0:00.01  md0_reshape
        2288  root  18  -2  12992  820  676  D  0     0.1   0:00.01  mdadm

    And all I see in mdstat is:

        $ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
              2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
              [>....................]  reshape =  0.0% (37888/976561152) finish=567604147.2min speed=0K/sec

    I've already tried mdadm 2.6.7, 3.1.4, and 3.2 - nothing helps. Did I lose my data again? Any suggestions on how I can make it work? The OS is Ubuntu Server 10.04.2... PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save the most valuable data. You can see my disappointment - the very specific thing I was excited about failed twice, taking 5TB of my data with it.
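    Before writing the data off: a reshape stuck at 0K/sec with md threads in D state is sometimes starved rather than truly deadlocked. Heavily hedged - these are things to try, not a guaranteed fix, and the sysfs knobs below only exist for raid5/6 arrays:

        # give the reshape a much larger stripe cache (the default is tiny)
        echo 32768 > /sys/block/md0/md/stripe_cache_size
        # raise the minimum rebuild speed the kernel aims for
        echo 50000 > /proc/sys/dev/raid/speed_limit_min
        # if the grow was started with a --backup-file, re-assembling with it
        # can restart a wedged reshape (path is an example):
        # mdadm --assemble /dev/md0 --backup-file=/root/md0-grow.backup /dev/sd[b-g]1
        cat /proc/mdstat    # watch whether the reshape counter starts moving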

    Read the article

  • need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    Little heads up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty throwable mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore, because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems: I know what a functional spec is. (sample): Do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are. Are these kinda user stories? I don't see any Business Value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a Financial Plan into the system because, well, that's the point of it? What Business Value reason do I add for things like this? Example: A user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature? How do I go into the finer details and system definitions: Actor: Finance Manager. Action: Adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: A blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create backlog lists/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories in it, with UI mockups? Now will these user stories then branch out tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add the blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of / form the technical specs? Some concrete examples/answers to my questions would be nice!!

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below; I didn't take any myself, and without a picture this report might have been too dry: What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting; I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert because of mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE, was discussed and considered to be extremely interesting and worth considering adopting NetBeans for, for no other reason than that.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded like it was really fantastic, among others such as Gerrit Grunwald's. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1,000 people), with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; looking forward to meeting some of the people I met again soon.

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effect system follows the common (e.g. COLLADA, DirectX) effect framework abstraction: Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.), or in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, & each effect instance object had its own parameter settings, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
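    To make the distinction concrete, here is a minimal sketch (all names mine) of the COLLADA reading: the effect owns the techniques and passes, while a material is just an instance of an effect plus parameter values and a chosen technique - which is also why a bare collection of parameterized effect instances is, in effect, a material system by another name:

        using System.Collections.Generic;

        class Pass      { /* render states + shader programs for one pipeline run */ }
        class Technique { public string Name; public List<Pass> Passes = new List<Pass>(); }
        class Effect    { public List<Technique> Techniques = new List<Technique>(); }

        // A material instantiates an effect: parameter values + technique selection.
        class Material
        {
            public Effect Effect;
            public string SelectedTechnique;   // e.g. "HighLOD" or "Night"
            public Dictionary<string, object> Parameters = new Dictionary<string, object>();
        }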

    Read the article

  • Why I Love Microsoft Development

    - by Brian Lanham
    I've been writing software for a while and recently had an opportunity to broaden my horizons and start developing for iOS. We decided to leverage, as much as possible, our existing skills and use MonoTouch and MonoDevelop by Novell. For those of you who do not know, Mono is a .NET port originally designed for Linux but adapted for other platforms as well. MonoTouch is a port specifically for building iOS applications using the .NET framework. MonoDroid is a port (in CTP-esque release) for Android.

    A MISSING COMPONENT - VISUAL DESIGNER

    MonoDevelop lacks one very significant component compared with other tools I am using: NO VISUAL DESIGNER. Instead of using an integrated visual designer, MonoDevelop shells out to the Mac OS "Interface Builder". Since MonoDevelop lets me have a "Visual Studio-esque" feel *and* I get to use C#, AND it's FREE, I am gladly willing to overlook this. In fact, it's not even a question. Free? Sure, I'll take it with no visual designer.

    In my experience I've grown from UNIX and DOS to .NET development through many steps: Java/JSP/Servlets, Windows, Web, etc. I've been doing .NET for quite a few years and I guess I just got "comfortable" with the tools.

    WHY AM I NOT GETTING IT?

    Interface Builder (IB) is amazingly confusing for me. I had the opportunity to speak at the Northern VA Code Camp on 12/11/2010. My presentation was "Getting Started with iOS Development using MonoTouch and C#". At the visual design part of the presentation, I asked one of the 3 or 4 Mac developers in the room about my confusion with IB. I don't understand why the "Classes" list includes objects. I don't understand what "File's Owner" is. And, most importantly, WHAT THE HECK IS AN OUTLET AND WHY IS IT NECESSARY?!?!?

    His response to these questions (especially the one about outlets): "They did it wrong." I'm accustomed to a visual designer that creates variables for graphical widgets for me. Not IB. Instead, I have to create "Outlets" manually. I still do not understand why, and the explanation from a seasoned Mac developer is that it's wrong. (He received nods of confirmation from the other Mac devs in the room.)

    I LOVE MS DEV

    I love development for Microsoft platforms using Microsoft development tools. I love Windows 7. I love Visual Studio 2010. I love SQL Server. Azure, Entity Framework, Active Directory, Office, WCF/WF/WPF, etc. are all designed with integration in mind. They are also all designed with developers in mind. Steve Ballmer recently ranted "It's the developers!" That's why it is relatively quick to build apps using MS tools. Clearly, MS knows that while we usually enjoy building technology solutions, we are here to make money. And we need tools that accelerate our time to market without compromising the power and quality of our solutions.

    So, yeah, I am sucking up, I guess. But I love Microsoft development. Thank you, Microsoft, for providing the plethora of great development tools.

    P.S. But please slow down a bit… I'm having trouble keeping up!

    Read the article

  • How to force a new Notification in notify-osd to show up without waiting for the earlier one to exit?

    - by Nirmik
    I have made a script (and a .desktop shortcut leading to this script) for starting and stopping xampp... It checks the status of xampp and accordingly either starts or stops it. I have assigned a notification as soon as the script starts, to display "Starting XAMPP..." or "Stopping XAMPP...", and then, when xampp has started or stopped, to display "XAMPP started..." or "XAMPP stopped..." I've used notify-send to show the notifications, as seen in the script below.

    The thing is that the second notification waits for the first one to disappear before popping up, even if xampp has already started/stopped. I want the new notification to appear immediately, by forcing the earlier one to exit before the completion of its life-cycle. You can see this take place when you activate/deactivate wireless/networking in quick succession... For example, "Wireless enabled" comes up on selecting enable wireless, and if you immediately select disable wireless, the "Wireless disabled" notification comes up without waiting for the "Wireless enabled" notification to complete its life-cycle. So how do I achieve this?

      #!/bin/sh
      SERVICE='proftpd'
      if ps ax | grep -v grep | grep $SERVICE > /dev/null
      then
          notify-send -i /opt/lampp/htdocs/xampp/img/logo-small.gif "Stopping XAMPP..." &&
          gksudo /opt/lampp/lampp stop &&
          notify-send -i /opt/lampp/htdocs/xampp/img/logo-small.gif "XAMPP Stopped."
      else
          notify-send -i /opt/lampp/htdocs/xampp/img/logo-small.gif "Starting XAMPP..." &&
          gksudo /opt/lampp/lampp start &&
          notify-send -i /opt/lampp/htdocs/xampp/img/logo-small.gif "XAMPP Started."
      fi

    On the man page for notify-send I found --urgency=LEVEL (or -u), where the levels are low, normal, critical. Is this of any use? Making it critical? I tried it with the command notify-send -u=critical"Testing", but that didn't work... it gives the error: Unknown urgency criticalTesting specified. Known urgency levels: low, normal, critical. And if I give the command notify-send -u=LOW"Testing" it gives me the error: missing argument to -u. Any relation??
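
    One avenue that may be worth exploring (my suggestion, not something from the question): the desktop notification spec allows a sender to replace an existing bubble by reusing its notification ID, and libnotify exposes this through update(). Here is a minimal sketch using PyGObject, assuming the python3-gi and libnotify packages are installed:

      import time
      import gi
      gi.require_version("Notify", "0.7")
      from gi.repository import Notify

      Notify.init("xampp-toggle")

      icon = "/opt/lampp/htdocs/xampp/img/logo-small.gif"
      n = Notify.Notification.new("Starting XAMPP...", "", icon)
      n.show()

      time.sleep(2)  # stand-in for the actual gksudo /opt/lampp/lampp work

      # Reusing the same Notification object keeps the same notification ID,
      # so this should replace the bubble on screen rather than queue behind it.
      n.update("XAMPP Started.", "", icon)
      n.show()

    Presumably something along these lines is what the "Wireless enabled/disabled" bubbles are doing; whether notify-osd swaps the bubble instantly is worth testing on your system.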

    Read the article

  • New Oracle VM RAC template with support for Oracle VM 3 built-in

    - by wcoekaer
    The RAC team did it again (thanks Saar!) - another awesome set of Oracle VM templates published and uploaded to My Oracle Support. You can find the main page here. What's special about the latest version of DeployCluster is that it integrates tightly with Oracle VM 3 manager. It basically is an Oracle VM frontend that helps start VMs, pass arguments down automatically, and there is absolutely no need to log into the Oracle VM servers or the guests. Once it completes, you have an entire Oracle RAC database setup ready to go.

    Here's a short summary of the steps:

    - Set up an Oracle VM 3 server pool
    - Download the Oracle VM RAC template from oracle.com
    - Import the template into Oracle VM using Oracle VM Manager (repository - import)
    - Create a public and private network in Oracle VM Manager in the network tab
    - Configure the template with the right public and private virtual networks
    - Create a set of shared disks (physical or virtual) to assign to the VMs you want to create (for ASM; at least 5)
    - Clone a set of VMs from the template (as many RAC nodes as you plan to configure); with Oracle VM 3.1 you can clone with a number, so one clone command for, say, 8 VMs is easy
    - Assign the shared devices/disks to the cloned VMs
    - Create a netconfig.ini file on your manager node or a client where you plan to run DeployCluster; this little text file just contains the IP addresses, hostnames, etc. for your cluster - it is a very simple, small text file
    - Run deploycluster.py with the VM names as arguments
    - Done.

    At this point, the tool will connect to Oracle VM Manager, start the VMs, and configure each one:

    - Configure the OS (Oracle Linux)
    - Configure the disks with ASM
    - Configure the clusterware (CRS)
    - Configure ASM
    - Create database instances on each node

    Now you are ready to log in and use your x-node database cluster. No need to download various products from various websites, click on trial licenses for the OS, or go to a Virtual Machine store with sample and test versions only - this is production ready and supported. Software. Complete.

    Example netconfig.ini:

      # Node specific information
      NODE1=racnode1
      NODE1VIP=racnode1-vip
      NODE1PRIV=racnode1-priv
      NODE1IP=192.168.1.2
      NODE1VIPIP=192.168.1.22
      NODE1PRIVIP=10.0.0.22

      NODE2=racnode2
      NODE2VIP=racnode2-vip
      NODE2PRIV=racnode2-priv
      NODE2IP=192.168.1.3
      NODE2VIPIP=192.168.1.23
      NODE2PRIVIP=10.0.0.23

      # Common data
      PUBADAP=eth0
      PUBMASK=255.255.255.0
      PUBGW=192.168.1.1
      PRIVADAP=eth1
      PRIVMASK=255.255.255.0
      RACCLUSTERNAME=raccluster
      DOMAINNAME=mydomain.com
      DNSIP=

      # Device used to transfer network information to second node
      # in interview mode
      NETCONFIG_DEV=/dev/xvdc

      # 11gR2 specific data
      SCANNAME=racnode12-scan
      SCANIP=192.168.1.50

    Read the article

  • Display Driver Issue on an HP TX2-1160ea Notebook

    - by Sam
    I'm new to Linux and recently switched to Ubuntu 11.04 due to my project requirements. My laptop has been freezing and going to a black screen of death when I run anything related to display (share desktop, stream video, etc.). Today I went through the Ubuntu forum to install the appropriate graphics driver and, after doing it, I rebooted my PC. It gave an error before login saying "select the recovery mode", and after clicking OK it didn't give the same error on reboot, but I've lost the 11.04 graphical interface and all I see is the interface of Ubuntu v10 with slow visuals (even scrolling up/down in the browser is really slow). For reference, here's a desktop screenshot so that you can understand the situation. Also the laptop is overheating. How can I fix this problem? How can I get the Ubuntu 11.04 view back? I also tried Google, but couldn't find any issue like this.

    Some general information:

    - Laptop: HP TouchSmart TX2-1160ea
    - Processor: AMD Turion TX2
    - Memory: 4GB
    - OS: Ubuntu 11.04

    Some debugging information:

      $ report-hw | grep controller
      lspci -knn: 00:11.0 SATA controller [0106]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] [1002:4391]
      lspci -knn: 00:14.3 ISA bridge [0601]: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller [1002:439d]
      lspci -knn: 01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics] [1002:9612]
      lspci -knn: 08:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
      lspci -knn: 09:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 02)

    And:

      $ dpkg -l '*fglrx*'
      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
      ||/ Name           Version        Description
      +++-==============-==============-============================================
      ii  fglrx          2:8.840-0ubunt Video driver for the ATI graphics accelerato
      ii  fglrx-amdcccle 2:8.840-0ubunt Catalyst Control Center for the ATI graphics
      un  fglrx-control  <none>         (no description available)
      un  fglrx-control- <none>         (no description available)
      ii  fglrx-dev      2:8.840-0ubunt Video driver for the ATI graphics accelerato
      un  fglrx-driver   <none>         (no description available)
      un  fglrx-driver-d <none>         (no description available)
      un  fglrx-kernel-s <none>         (no description available)
      un  fglrx-modalias <none>         (no description available)
      un  xfree86-driver <none>         (no description available)
      un  xfree86-driver <none>         (no description available)
      un  xorg-driver-fg <none>         (no description available)
      un  xorg-driver-fg <none>         (no description available)

    If you need any more information that could help, just ask.

    Read the article

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    Hi everyone. I have an account with one of the hosting companies, and I did my project in asp.net and used MySQL for the database. The hosting company is not giving me full privileges to create new users or to create new stored procedures!!! This is what they said to me:

    "Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html 'Note: When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure.' Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed."

    So is there any good hosting that anybody has already used to publish an asp.net and MySQL project? This is one of my stored procedures; I think it's simple and will not harm any other users:

      -- --------------------------------------------------------------------------------
      -- Routine DDL
      -- Note: comments before and after the routine body will not be stored by the server
      -- --------------------------------------------------------------------------------
      DELIMITER $$

      CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
          IN PaRactioncode VARCHAR(5),
          IN PaRCatID BIGINT,
          IN PaRSearchText TEXT
      )
      BEGIN
          -- CREATE A TEMPORARY TABLE TO SAVE DATA FROM THE ACTION-CODE SELECTS --
          DROP TEMPORARY TABLE IF EXISTS tmp;
          CREATE TEMPORARY TABLE tmp (
              CatID BIGINT PRIMARY KEY NOT NULL,
              CatTitle TEXT,
              CatDescription TEXT,
              CatTitleAr TEXT,
              CatDescriptionAr TEXT,
              PictureID BIGINT,
              Published BOOLEAN,
              DisplayOrder BIGINT,
              CreatedOn DATE
          );

          IF PaRactioncode = 1 THEN
              -- Retrieve all data from the database --
              INSERT INTO tmp
              SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
              FROM tbcategories;
          ELSEIF PaRactioncode = 2 THEN
              -- Retrieve from the database by ID --
              INSERT INTO tmp
              SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
              FROM tbcategories WHERE CatID = PaRCatID;
          ELSEIF PaRactioncode = 3 THEN
              -- NOT SET YET --
              INSERT INTO tmp
              SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
              FROM tbcategories WHERE Published = 1 ORDER BY DisplayOrder;
          END IF;

          IF PaRSearchText IS NOT NULL THEN
              SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
              SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
              FROM tmp
              WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
          ELSE
              SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
              FROM tmp;
          END IF;

          DROP TEMPORARY TABLE IF EXISTS tmp;
      END$$
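
    Incidentally, if you want to exercise a procedure like this from a quick script rather than through ADO.NET, here is a minimal sketch using Connector/Python; the connection details are placeholders, not values from the question:

      # Minimal sketch: call the stored procedure and print its result set.
      # host/user/password/database below are placeholders.
      import mysql.connector

      cnx = mysql.connector.connect(host="localhost", user="me",
                                    password="secret", database="mydb")
      cur = cnx.cursor()

      # The three IN parameters declared by SpcategoriesRead:
      # action code, category id, search text.
      cur.callproc("SpcategoriesRead", ("1", 0, None))

      # Stored procedures hand their result sets back via stored_results().
      for result in cur.stored_results():
          for row in result.fetchall():
              print(row)

      cur.close()
      cnx.close()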

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my Ruby and Rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally, tonight, things have really started to come together. I still don't understand the Rails built-in testing support, but I will get there.

    Interesting Things I Learned Tonight

    RubyMine IDE

    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it - but I don't. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++, if you place the cursor at the start of a line and press the left arrow, the cursor is sent to the end of the previous line. In RubyMine, nothing happens.

    Haml

    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent - nHaml.

    Multiple CSS Classes

    To define a div with more than one css class, haml lets you chain them together with a '.', such as:

      .span-6.search_result
        contents of the div go here

    Indent Consistency

    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine, my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.

    Sorting Arrays

    I guessed that ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:

      Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

      Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id

    Since document databases are just fancy key-value stores, it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems the MongoMapper author did not bother to document it. To search by id, simply pass the id to the find method:

      Application.find('4c19e8facfbfb01794000002')

    Rails And CoffeeScript

    I am a big fan of CoffeeScript, so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic's strategy. Unfortunately, I did not get past step 1: Install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now.

    Quicksearch

    The front page of the application I am building displays a list of applications. When the user types in the search box, I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jQuery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

      $('#app_search').quicksearch('.search_result');

    Summary

    I have had a productive evening. The app now displays a list of applications, allows them to be sorted, and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2-node, fully redundant Oracle RAC cluster? Not very. Unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it, just check out these blog posts from partners and end users:

    - The Oracle Database Appliance Experience - Zip Zoom Zoom: http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html
    - Off-the-shelf Oracle database servers: http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/
    - Oracle Database Appliance – Deployment Steps: http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps

    See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier.

    The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration, based on best practices. But in order for it to do that, it needs some basic information first, including system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to specifying the information needed.

    Using the Appliance Manager GUI is pretty straightforward, stepping through several screens of information to enter data in typical wizard style. Like most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults for the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI.

    Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type.

    One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but also as the base for the default hostnames on following screens. In this screen shot, the System Name is "oda".

    When you get to the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, seen below, where you will see the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for default host names for the extra ethernet ports available for configuration as part of a Custom configuration, as seen in the 2nd screen shot below (Other Network). There is no requirement to use these host names, as you can easily edit any of the host names.
    This does make filling in the configuration details easier and less prone to "fat fingers" if you are OK with these host names. Here is the full list of automatically filled-in host names, based on the System Name (here "oda"):

      oda1          oda2
      oda1-vip      oda2-vip
      oda-scan
      oda1-ilom     oda2-ilom
      oda1-net1     oda2-net1
      oda1-net2     oda2-net2
      oda1-net3     oda2-net3

    Another auto-fill feature of the Appliance Manager GUI follows a common practice of deploying IP addresses for a RAC cluster in sequential order. In the screen shot below, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, sequentially from the first one I entered. As with the host names, these are not required, and can be changed to whatever your IP address values are. One note of caution though: if the first IP address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be set to the sequential default. If you don't use sequential IP addresses, pay attention to where you click that mouse. :-)

    In the screen shot below, by entering the netmask value in the Netmask field, in this case 255.255.255.0, the gateway value was auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value.

    My last 2 screen shots illustrate that the same sequential IP address auto-fill, and the netmask-to-gateway auto-fill, work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) for both nodes. The time these auto-fill capabilities save in entering data is nice, but from my perspective not as important as the opportunity to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.
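
    As a rough illustration (my own sketch, not Oracle's code), the sequential-fill behavior described above amounts to nothing more than incrementing the first address, and a plausible netmask-to-gateway default falls out of the same network arithmetic:

      # Hypothetical re-creation of the wizard's auto-fill logic.
      import ipaddress

      def sequential_fill(first_ip, count=5):
          # Given the first node IP, propose the next `count` addresses in order.
          start = ipaddress.ip_address(first_ip)
          return [str(start + i) for i in range(1, count + 1)]

      print(sequential_fill("192.168.1.21"))
      # ['192.168.1.22', '192.168.1.23', '192.168.1.24', '192.168.1.25', '192.168.1.26']

      # A plausible gateway default from IP plus netmask: the first usable
      # host of the resulting network (my assumption, not a documented rule).
      net = ipaddress.ip_interface("192.168.1.21/255.255.255.0").network
      print(next(net.hosts()))  # 192.168.1.1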

    Read the article
