Search Results

Search found 48592 results on 1944 pages for 'cannot start'.

Page 760/1944 | < Previous Page | 756 757 758 759 760 761 762 763 764 765 766 767  | Next Page >

  • October 2013 Cumulative Update for SQL Server 2008 R2

    - by AaronBertrand
    Microsoft has released Cumulative Update #9 for SQL Server 2008 R2 Service Pack 2. KB Article: KB #2887606. 17 fixes listed at time of publication. Build number is 10.50.4295. Relevant for @@VERSION 10.50.4000 through 10.50.4294. My usual disclaimer: these updates are NOT for SQL Server 2008 (or SQL Server 2012). Only apply to systems where SELECT @@VERSION returns 10.50.xxxx, where xxxx is >= 2500. If xxxx < 2500, you need to start thinking about getting off the RTM branch. Note that no more cumulative...(read more)
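    If you script this sort of patch check, the applicability rule above can be tested programmatically. The following is a minimal sketch (not from the post) using Python and pyodbc; the connection string and helper names are placeholders for your own environment.

        # Hypothetical helper (not from the post): check whether a SQL Server 2008 R2
        # instance is in the build range that this CU applies to. Assumes pyodbc and a
        # working ODBC connection string for your server (placeholder below).
        import pyodbc

        APPLICABLE_RANGE = ((10, 50, 4000), (10, 50, 4294))  # builds CU9 applies to

        def product_version(conn_str):
            conn = pyodbc.connect(conn_str)
            try:
                row = conn.cursor().execute(
                    "SELECT CAST(SERVERPROPERTY('ProductVersion') AS varchar(32))"
                ).fetchone()
                return tuple(int(part) for part in row[0].split('.')[:3])
            finally:
                conn.close()

        def cu9_applies(version):
            low, high = APPLICABLE_RANGE
            return low <= version <= high

        version = product_version("DSN=MySqlServer;Trusted_Connection=yes")  # placeholder DSN
        print(version, "-> apply CU9" if cu9_applies(version) else "-> CU9 not applicable")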

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am a new person in Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and installed the file system as described in the documentation provided on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot/, and it starts booting from the existing SD card. After that, while booting, some errors occurred for some Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth by giving the following command: sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard are not working. But the mouse and keyboard are enumerating; the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04. The Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.
    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.
    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).
    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912
    For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices
    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.
    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.
    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.
    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.
    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.
    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).
    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script,
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.
    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live (see the sketch after this excerpt). And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).
    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.
    NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.
    There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.
    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace
    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515
    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.
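    As referenced above, here is a minimal, hypothetical sketch (not from the article) of what the semi-automated gate could look like as a script. It assumes a command-line schema compare tool and a pre-generated, already-reviewed upgrade script; the tool names, server names and arguments below are placeholders to swap for your own toolchain.

        # Hypothetical semi-automated deployment gate (illustration only).
        # Assumes: some command-line schema compare tool ("compare-tool") that exits
        # non-zero when two databases differ, a placeholder "run-sql" script runner,
        # and a pre-generated, already-reviewed upgrade script.
        import subprocess
        import sys

        def environments_match(source, target):
            """True if the compare tool reports no drift between source and target."""
            result = subprocess.run(["compare-tool", "--source", source, "--target", target])
            return result.returncode == 0

        def run_script(server, script_path):
            """Run the upgrade script against a server (placeholder command)."""
            subprocess.run(["run-sql", "--server", server, "--file", script_path], check=True)

        def deploy(preprod, prod, script_path):
            # 1. Automated gate: fail fast if production has drifted from pre-production.
            if not environments_match(preprod, prod):
                sys.exit("Pre-production and production do not match - aborting.")

            # 2. Manual intervention: the DBA approves the reviewed script.
            answer = input("Script %s reviewed. Deploy to production? [y/N] " % script_path)
            if answer.strip().lower() != "y":
                sys.exit("Deployment not approved.")

            # 3. Automated: rehearse on pre-production, then release to production.
            run_script(preprod, script_path)
            run_script(prod, script_path)
            print("Deployment complete.")

        if __name__ == "__main__":
            deploy("preprod-server", "prod-server", "upgrade.sql")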

    Read the article

  • Doing your first mock with JustMock

    In this post, I will start with a more traditional mocking example that includes a fund transfer scenario between two different currency accounts using JustMock. Our target interface that we will be mocking looks similar to: public interface ICurrencyService { float GetConversionRate(string fromCurrency, string toCurrency); } Moving forward, the SUT or class that will be consuming the service and will be invoked by the user [provided that the ICurrencyService will be passed...
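    For readers more comfortable with Python than C#, the same idea can be sketched with the standard library's unittest.mock (an analogous sketch only, not JustMock and not the article's code): stub the conversion-rate lookup and assert that the transfer used it.

        # Analogous sketch using Python's unittest.mock (not JustMock / not C#):
        # stub the currency service's conversion rate and verify a transfer uses it.
        from unittest import mock

        class AccountService:
            """Toy SUT: transfers funds between accounts held in different currencies."""
            def __init__(self, currency_service):
                self.currency_service = currency_service

            def transfer(self, from_account, to_account, amount):
                rate = self.currency_service.get_conversion_rate(
                    from_account["currency"], to_account["currency"])
                from_account["balance"] -= amount
                to_account["balance"] += amount * rate

        def test_transfer_uses_conversion_rate():
            currency_service = mock.Mock()
            currency_service.get_conversion_rate.return_value = 1.5  # stubbed rate

            gbp = {"currency": "GBP", "balance": 100.0}
            cad = {"currency": "CAD", "balance": 0.0}

            AccountService(currency_service).transfer(gbp, cad, 100.0)

            currency_service.get_conversion_rate.assert_called_once_with("GBP", "CAD")
            assert gbp["balance"] == 0.0
            assert cad["balance"] == 150.0

        test_transfer_uses_conversion_rate()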

    Read the article

  • ASP.NET 3.5 Debugging Using Visual Web Developer Express 2008

    One of the most important features in Visual Web Developer Express 2008 in developing ASP.NET 3.5 websites is the debugging feature. Having a debugger is important in troubleshooting source code and application-related problems. It will save you a lot of time if you encounter and fix problems during the design and testing stage. This article is all about basic debugging in ASP.NET using Visual Web Developer Express; its information will provide you with an important tool for designing and creating ASP.NET websites....

    Read the article

  • Solaris Non global Zone x server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? I created the xorg.conf by running /usr/X11/bin/xorgconfig. This is what happens if I run startx from within my zone: root@foo:/usr/X11/bin# startx xauth: creating new authority file /root/.serverauth.20957 X.Org X Server 1.5.3 Release Date: 5 November 2008 X Protocol Version 11, Revision 0 Build Operating System: SunOS 5.11 snv_108 i86pc Current Operating System: SunOS dsol101 5.11 snv_111b i86pc Build Date: 07 May 2009 04:44:56PM Solaris ABI: 64-bit SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07 SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02 Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009 (==) Using config file: "/etc/X11/xorg.conf" Fatal server error: xf86OpenConsole: Cannot open /dev/fb (No such file or directory)

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS Queue. It just so happens that the WLS Console has this capability already. It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab, as seen in the screen shot below. The console is great for something ad-hoc, but what if I want to automate this? Well it just so happens that the console is just a weblogic application layered on top of the JMX Management interface. If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages that takes a JMS Message Selector as an argument. If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting. In addition to the console, WebLogic Scripting Tool (WLST), based on Jython, is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would look like this: D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   wls:/offline> connect('weblogic','welcome1','t3://localhost:7001') Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   wls:/hotspot_domain/serverConfig> serverRuntime() Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
For more help, use help(serverRuntime)   wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0') wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls() dr-- DurableSubscribers   -r-- BytesCurrentCount 0 -r-- BytesHighCount 174620 -r-- BytesPendingCount 0 -r-- BytesReceivedCount 253548 -r-- BytesThresholdTime 0 -r-- ConsumersCurrentCount 0 -r-- ConsumersHighCount 0 -r-- ConsumersTotalCount 0 -r-- ConsumptionPaused false -r-- ConsumptionPausedState Consumption-Enabled -r-- DestinationInfo javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo,items=((itemName=ApplicationName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ModuleName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName openmbean.SimpleType(name=java.lang.Boolean)),(itemName=SerializedDestination,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ServerName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=Topic,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=VersionNumber,itemType=javax.management.op ule-0!Queue-0, Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1}) -r-- DestinationType Queue -r-- DurableSubscribers null -r-- InsertionPaused false -r-- InsertionPausedState Insertion-Enabled -r-- MessagesCurrentCount 0 -r-- MessagesDeletedCurrentCount 3 -r-- MessagesHighCount 2 -r-- MessagesMovedCurrentCount 0 -r-- MessagesPendingCount 0 -r-- MessagesReceivedCount 3 -r-- MessagesThresholdTime 0 -r-- Name SystemModule-0!Queue-0 -r-- Paused false -r-- ProductionPaused false -r-- ProductionPausedState Production-Enabled -r-- State advertised_in_cluster_jndi -r-- Type JMSDestinationRuntime   -r-x closeCursor Void : String(cursorHandle) -r-x deleteMessages Integer : String(selector) -r-x getCursorEndPosition Long : String(cursorHandle) -r-x getCursorSize Long : String(cursorHandle) -r-x getCursorStartPosition Long : String(cursorHandle) -r-x getItems javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID) -r-x getMessage javax.management.openmbean.CompositeData : String(messageID) -r-x getMessages String : String(selector),Integer(timeout) -r-x getMessages String : String(selector),Integer(timeout),Integer(state) -r-x getNext javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x getPrevious javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x importMessages Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly) -r-x moveMessages Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer) -r-x moveMessages Integer : String(selector),javax.management.openmbean.CompositeData -r-x pause Void : -r-x pauseConsumption 
Void : -r-x pauseInsertion Void : -r-x pauseProduction Void : -r-x preDeregister Void : -r-x resume Void : -r-x resumeConsumption Void : -r-x resumeInsertion Void : -r-x resumeProduction Void : -r-x sort Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)   wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('') 2 where the domain name is “hotspot_domain”, the JMS Server name is “JMSServer-0”, the Queue name is “Queue-0” and the System Module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.
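    To turn the interactive session above into something you can run unattended, a non-interactive WLST (Jython) script along these lines would work; it reuses the example names from the post (hotspot_domain's AdminServer, JMSServer-0, SystemModule-0!Queue-0, the t3://localhost:7001 URL and credentials), all of which you would replace with your own.

        # purge_queue.py - a sketch of the session above as a script.
        # Run with: java weblogic.WLST purge_queue.py
        # The URL, credentials and JMS names below are the example values from the
        # post; change them for your own domain.

        url         = 't3://localhost:7001'
        username    = 'weblogic'
        password    = 'welcome1'
        server      = 'AdminServer'
        jmsServer   = 'JMSServer-0'
        destination = 'SystemModule-0!Queue-0'

        connect(username, password, url)
        serverRuntime()

        # Navigate to the destination's runtime MBean, as in the interactive session.
        cd('JMSRuntime/%s.jms/JMSServers/%s/Destinations/%s' % (server, jmsServer, destination))

        # An empty selector ('') acts as a wildcard and matches every message.
        deleted = cmo.deleteMessages('')
        print 'Deleted %d messages from %s' % (deleted, destination)

        disconnect()
        exit()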

    Read the article

  • Fullscreen Video stutters on second monitor laptop

    - by nobrandheroes
    Fullscreen video on my new 1080p monitor is choppy when it comes from my laptop. The same video plays fine when not fullscreen. This goes for all video (Flash, MKV, etc.), regardless of video resolution. I have an ATI Mobility Radeon HD 4200 Series card in my Thinkpad Edge, Turion X2 2GHz. The computer plays 1080p fine. Things I've tried: updating drivers, switching cables, turning hardware acceleration off, changing the video player's process priority, rebooting, turning off the laptop screen, turning off unused processes. Nothing works. What is the likelihood that my laptop cannot power a 1920x1280 display?

    Read the article

  • How to increase video memory in libvirt/KVM gui?

    - by Dejan
    In the 'Virtual Hardware details', it lists the model as 'cirrus' with 9MB of RAM. The RAM field cannot be changed, but how do I increase the video RAM? My host OS is RHEL 6 and my guest OS is Fedora 16. EDIT: From the guest OS, when I run xvinfo it displays 'no adaptors present'. I was trying to play a video using GStreamer's xvimagesink plugin (XFree86 video output plugin using the Xv extension). The problem is that xvimagesink uses hardware acceleration for video performance, hence the error 'Could not initialize Xv output'. I guess I'll have to configure hardware acceleration for the guest.

    Read the article

  • Serving Meteor on main domain and Apache on subdomain independently

    - by kinologik
    I'm running a Meteor server on my Ubuntu server, but problems arise when I try to have Apache serving a subdomain on the same server. main.domain.com - Meteor; sub.domain.com - Apache. Meteor is running on port 80. I previously tried to have Meteor run on port 3000 and served in reverse proxy with Nginx, but Meteor started to behave badly (TCP/websockets issues) and I spent too many evenings and nights on it to persist, for my own sake. So I reverted my setup to have Meteor be the main server (the app works fine), and then installed Apache to serve my subdomain. The problem is I cannot have Apache serve on port 80 too, since it seems to overrun my Meteor server. From experience, I try to stay away from reverse-proxying Meteor, but I'm not knowledgeable enough to get Apache to dedicate itself to my subdomain without overwhelming "everything port 80" on my server. How can I have both services behave with each other in this kind of setup?

    Read the article

  • Uninstall ubuntu from external hard drive + remove bootloader from internal hd

    - by Bart Sleeuwen van
    I have created a bootable USB stick with LinuxLive and Ubuntu. I then tried to install Ubuntu on my external hard drive. However, I guess I wrongly picked the drive where the boot section is installed; it is now installed on my internal (C:) drive. When I open the BIOS boot sequence it lists Ubuntu twice, as if the boot loader is installed twice. I'd like to uninstall Ubuntu from my external drive but, most important: I don't want Grub to start up on my PC. I tried booting in Windows 8 recovery mode and used the following commands: bootrec /fixmbr bootrec /fixboot However, the Ubuntu choices won't disappear. I then tried to add bootrec /scanos and bootrec /rebuildbcd to the commands but somehow this won't work either. Anyone got some ideas? Thank you in advance! Best regards, Bart

    Read the article

  • Snap Server 18000 connection help!

    - by sicko666
    I wonder if anyone here can help me. I have a home server setup made up of old secondhand computers: 2 servers running Windows Server 2003, 1 workstation running Windows 7, a 16-port switch and an ADSL Ethernet modem. All these connect and talk to each other fine, but then I got a "Snap Server 18000" and a "Snap Disk 30sa" SATA array. When I turn the Snap on, it boots past the BIOS, runs a kernel, then displays: This device cannot be managed via the video/kbd/mouse interface. The video is now disabled. You may access the management functions from your web browser. Only, none of the other PCs detect it, so no browser can find it! I have checked all cables, and all LEDs indicate there's a connection. I have installed the Windows iSCSI initiator and the Adaptec "Snap Server Manager" on all PCs but it's still not detected. I don't know what else to do, please advise!

    Read the article

  • How can I generate a 2d navigation mesh in a dynamic environment at runtime?

    - by Stephen
    So I've grasped how to use A* for path-finding, and I am able to use it on a grid. However, my game world is huge and I have many enemies moving toward the player, which is a moving target, so a grid system is too slow for path-finding. I need to simplify my node graph by using a navigation mesh. I grasp the concept of "how" a mesh works (finding a path through nodes on the vertices and/or the centers of the edges of polygons). My game uses dynamic obstacles that are procedurally generated at run-time. I can't quite wrap my head around how to take a plane that has multiple obstacles in it and programmatically divide the walkable area up into polygons for the navigation mesh, like the following image. Where do I start? How do I know when a segment of walkable area is already defined, or worse, when I realize I need to subdivide a previously defined walkable area as the algorithm "walks" through the map? I'm using JavaScript in Node.js, if it matters.
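    One simple way to get started (a rough sketch of one possible approach, not a full navmesh library): slice the walkable area along the obstacle edges into axis-aligned rectangles, then treat each rectangle as a polygon/node and connect rectangles that share an edge for A* to walk over. The sketch below assumes purely rectangular obstacles and is written in Python for illustration only; when obstacles change at runtime you would re-run the decomposition (or just the affected region).

        # Sketch: decompose a rectangular area minus axis-aligned rectangular obstacles
        # into walkable rectangles ("coordinate slicing"), then build an adjacency
        # graph suitable for A*. Illustrative only - assumes rectangular obstacles.

        def decompose(width, height, obstacles):
            """obstacles: list of (x1, y1, x2, y2) rectangles. Returns walkable rects."""
            xs = sorted({0, width} | {x for o in obstacles for x in (o[0], o[2])})
            ys = sorted({0, height} | {y for o in obstacles for y in (o[1], o[3])})

            def blocked(x1, y1, x2, y2):
                return any(x1 < o[2] and x2 > o[0] and y1 < o[3] and y2 > o[1]
                           for o in obstacles)

            cells = []
            for x1, x2 in zip(xs, xs[1:]):
                for y1, y2 in zip(ys, ys[1:]):
                    if not blocked(x1, y1, x2, y2):
                        cells.append((x1, y1, x2, y2))
            return cells

        def adjacency(cells):
            """Connect cells that share an edge segment; A* can run over this graph."""
            graph = {i: set() for i in range(len(cells))}
            for i, a in enumerate(cells):
                for j, b in enumerate(cells):
                    if i >= j:
                        continue
                    share_x = a[0] < b[2] and b[0] < a[2]   # ranges overlap in x
                    share_y = a[1] < b[3] and b[1] < a[3]   # ranges overlap in y
                    touch_x = a[2] == b[0] or b[2] == a[0]  # touching left/right edges
                    touch_y = a[3] == b[1] or b[3] == a[1]  # touching top/bottom edges
                    if (touch_x and share_y) or (touch_y and share_x):
                        graph[i].add(j)
                        graph[j].add(i)
            return graph

        cells = decompose(100, 60, [(20, 10, 40, 30), (60, 20, 80, 50)])
        print(len(cells), "walkable rectangles")
        print(adjacency(cells))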

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition Process; the Create Accounting program does not. So if you create transactions with rules, then you would want to run the Submit Accounting process to spawn Revenue Recognition to create the distribution rows, which Create Accounting is then spawned to process to the GL.
    Comparison of Create Accounting vs. Submit Accounting:
    - Short name for concurrent program: Create Accounting = XLAACCPB; Submit Accounting = ARACCPB
    - Specific to Receivables: Create Accounting = No; Submit Accounting = Yes
    - Runs Revenue Recognition automatically: Create Accounting = No; Submit Accounting = Yes
    - Can be run real-time for one Transaction/Receipt at a time: Create Accounting = Yes; Submit Accounting = No
    - Programs spawned by Create Accounting: 1) XLAACCPB module: Create Accounting, 2) XLAACCUP module: Accounting Program, 3) GLLEZL module: Journal Import
    - Programs spawned by Submit Accounting: 1) ARTERRPM module: Revenue Recognition Master Program, 2) ARTERRPW module: Revenue Recognition with parallel workers - could be numerous, 3) ARREVSWP - Revenue Contingency Analyzer, 4) XLAACCPB module: Create Accounting, 5) XLAACCUP module: Accounting Program, 6) GLLEZL module: Journal Import
    Keep in mind, reports owned by application 'Subledger Accounting' cannot be seen when running the report from a Receivables responsibility. You may want to request your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need these for your AR closing process: XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed; XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL.
    To add reports/programs owned by application 'Subledger Accounting' (Subledger Period Close Exception Report and Transfer Journal Entries to GL) to the request group, proceed as follows. Let's use the Subledger Accounting report XLATBRPT: Open Account Balances Listing Report as an example. Responsibility: System Administrator. Navigation: Security > Responsibility > Define. Query the name of your Receivables Responsibility and note the Request Group (ie. Receivables All). Navigation: Security > Responsibility > Request. Query the Request Group, go to the Request zone and click on Add Record. Enter the following: Type: Program, Name: Open Account Balances Listing. Save. Responsibility: Receivables Manager. Navigation: Control > Requests > Run. In the list of values you should now see the 'Open Account Balances Listing' report.
    References: Note: 748999.1 How to add reports for application subledger accounting to receivables responsibility; Note: 759534.1 R12 ARGLTP General Ledger Transfer Program Errors Out; Note: 1121944.1 Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • Windows 7 boot manager issue

    - by L.ppt
    I had Windows 7 installed on my laptop; yesterday I tried to install the openSUSE operating system. During its installation I chose an NTFS partition and formatted it to the ext4 filesystem. During installation an error came up saying the mount point could not be created on this partition, and I aborted the installation. Then on reboot a message came up: "BOOTMGR is missing". I then reinstalled Windows, but on completing the installation, when setup rebooted the system, a blank screen came up with a blinking cursor. I went through many forums and learnt many startup repairs and commands, but it continues to hang at a blank screen with a blinking cursor. Reinstalling a new Windows 7 also has no effect. I urgently need to repair my laptop for very important work. Please help.

    Read the article

  • NGINX : Proxy pass intercepting 5xx errors - Possible to differentiate between ones fired by the backend vs ones fired by nginx itself?

    - by anonymous-one
    We use proxy_intercept_errors ( http://wiki.nginx.org/HttpProxyModule#proxy_intercept_errors ) with our backends. We intercept a number of status codes, including a few 5xx ones. Our 5xx handlers (each 5xx code has its own) have an access_log, so we can see all the 5xx errors returned to the user in a nice clean logged format. The issue with this is that, as it stands now, we cannot tell whether a 5xx was returned to the user by nginx itself or intercepted from our backend. Is there any way to differentiate between the two? Thanks.

    Read the article

  • 2 pdfs look same on XP, different on Win7

    - by David Dai
    I have 2 PDF files. I compared them with WinMerge, BeyondCompare, and even compared their checksums. They are exactly the same to me in every way. If I open them with Adobe Reader on XP and compare them with my bare eyes, they look the same. But!!! If I open them with Adobe Reader on Win7 and compare them with my bare eyes, they look very different! (particularly the border width). I'm sorry I cannot share the 2 PDF files, but I would appreciate it if anyone could come up with any ideas!

    Read the article

  • Procedural... house with rooms generator

    - by pek
    I've been looking at some algorithms and articles about procedurally generating a dungeon. The problem is, I'm trying to generate a house with rooms, and they don't seem to fit my requirements. For one, dungeons have corridors, where houses have halls. And while initially they might seem the same, a hall is nothing more than the area that isn't a room, whereas a corridor is specifically designed to connect one area to another. Another important difference with a house is that you have a specific width and height, and you have to fill the entire thing with rooms and halls, whereas with a dungeon, there is empty space. I think halls in a house are something in between a dungeon corridor (gets you to other rooms) and empty space in the dungeon (it's not explicitly defined in code). More specifically, the requirements are:
    - There is a set of predefined rooms: I cannot create walls and doors on the fly.
    - Rooms can be rotated but not resized: again, because I have a predefined set of rooms, I can only rotate them, not resize them.
    - The house dimensions are set and have to be entirely filled with rooms (or halls): i.e. I want to fill a 14x20 house with the available rooms, making sure there is no empty space.
    Here are some images to make this a little more clear: As you can see, in the house, the "empty space" is still walkable and it gets you from one room to another. So, having said all this, maybe a house is just a really, really tightly packed dungeon with corridors. Or it's something easier than a dungeon. Maybe there is something out there and I haven't found it because I don't really know what to search for. This is where I'd like your help: could you give me pointers on how to design this algorithm? Any thoughts on what steps it will take? If you have created a dungeon generator, how would you modify it to fit my requirements? You can be as specific or as generic as you like. I'm looking to pick your brains, really.
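    As a starting point, one very simplified approach (a sketch under heavy assumptions, not a full answer) is to treat the house as a grid and backtrack over room placements: at the first unfilled cell, try each remaining room in each rotation, and let any cell no room can cover become hall. The Python below treats rooms as plain width x height rectangles for illustration; real rooms with fixed doors and walls would need extra placement constraints.

        # Sketch: fill a fixed-size house grid with predefined rectangular rooms via
        # backtracking; any cell left uncovered becomes "hall". Illustrative only.

        def rotations(room):
            w, h = room
            return {(w, h), (h, w)}  # rooms can be rotated but not resized

        def solve(width, height, rooms):
            grid = [[None] * width for _ in range(height)]

            def first_empty():
                for y in range(height):
                    for x in range(width):
                        if grid[y][x] is None:
                            return x, y
                return None

            def fits(x, y, w, h):
                if x + w > width or y + h > height:
                    return False
                return all(grid[y + dy][x + dx] is None
                           for dy in range(h) for dx in range(w))

            def fill(x, y, w, h, value):
                for dy in range(h):
                    for dx in range(w):
                        grid[y + dy][x + dx] = value

            def backtrack(remaining):
                spot = first_empty()
                if spot is None:
                    return True                      # whole house covered
                x, y = spot
                for i, room in enumerate(remaining):
                    for w, h in rotations(room):
                        if fits(x, y, w, h):
                            fill(x, y, w, h, ('room', room))
                            if backtrack(remaining[:i] + remaining[i + 1:]):
                                return True
                            fill(x, y, w, h, None)   # undo and try something else
                grid[y][x] = 'hall'                  # fall back: this cell becomes hall
                if backtrack(remaining):
                    return True
                grid[y][x] = None
                return False

            backtrack(list(rooms))
            return grid

        layout = solve(14, 20, [(6, 8), (8, 6), (5, 5), (4, 10), (6, 4)])
        print(sum(row.count('hall') for row in layout), "hall cells")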

    Read the article

  • Roaming profile migration failed using Windows explorer manual copy

    - by Albert Widjaja
    Hi All, I'm at the final stage of migrating an old demoted DC server, but now I'm stuck migrating/copying the roaming profiles of the users from the old Win2k server (oldServer1) to the new Win2k3 server (newServer1). There are lots of profiles whose ProfilePath points to the old server: \\oldServer1\profiles\user1 \\oldServer1\profiles\user2 \\oldServer1\profiles\user3 . . . \\oldServer1\profiles\userN I'd like to move them to: \\newServer1\profiles\user1 \\newServer1\profiles\user2 \\newServer1\profiles\user3 . . . \\newServer1\profiles\userN I tried to copy and paste from my DOMAIN\Administrator account but it failed to copy; I cannot even browse inside the directories of user1 through userN. Is there any faster way to do the copy rather than taking ownership of each of those directories one by one? [Hopefully by taking ownership the users will still be able to use their profiles normally.] Thanks.

    Read the article

  • The Best Websites and Software for Brainstorming and Mind Mapping

    - by Lori Kaufman
    A mind map is a diagram that allows you to visually outline information, helping you organize, solve problems, and make decisions. Start with a single idea in the center of the diagram and add associated ideas, words, and concepts connected radially around the central idea. We’ve collected links to websites and software that can help you create mind maps, and collaborate on and share your maps with others. The programs and websites listed here are all either free or have a free option.

    Read the article

  • CSS Style Element if it does not contain another specific type of Element [migrated]

    - by Chris S
    My CSS includes the following: #mainbody a[href ^='http'] { background:transparent url('/images/icons/external.svg') no-repeat top right; padding-right: 12px; } This places an "external" icon next to links that start with "http" (all internal site links are relative). It works perfectly, except that if I link an image, it also gets this icon. For example: <a href='http://example.com'><img src='whatever.jpg'/></a> would also get the "external" icon next to the image. I can live with this if necessary, but would like to eliminate it. This must be implemented in CSS (no JS); it must not require any special IDs, classes, or styling in the HTML for the image or the anchor around the image. Is this possible?

    Read the article

  • Playing an Online mp3

    - by Mohsen
    I have a problem with playing online mp3s. I'm using the latest versions of JavaZoom's JLayer and BasicPlayer. Here is the exception: Caused by: javazoom.jlgui.basicplayer.BasicPlayerException: java.io.EOFException at javazoom.jlgui.basicplayer.BasicPlayer.initAudioInputStream(Unknown Source) at javazoom.jlgui.basicplayer.BasicPlayer.open(Unknown Source) ... 12 more Caused by: java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at com.sun.media.sound.WaveFileReader.getFMT(WaveFileReader.java:244) at com.sun.media.sound.WaveFileReader.getAudioFileFormat(WaveFileReader.java:85) at javax.sound.sampled.AudioSystem.getAudioFileFormat(AudioSystem.java:985) at javazoom.jlgui.basicplayer.BasicPlayer.initAudioInputStream(Unknown Source) ... 15 more My Java version is 1.6.0_16. Certain files cannot be played over the Internet. I have a set of mp3s, playing one after the other. Randomly, one mp3 doesn't work, throwing the above exception. Some mp3s can be played by calling the play() method of JavaZoom's BasicPlayer again, but others can never be played online. I was able to find this post, but I doubt this really relates to my DirectX version or something. Mohsen

    Read the article

  • Is flash game development not considered 'proper' game development?

    - by humbleBee
    I've come across this a couple of times: that Flash game development is not 'proper' game development when compared to XNA or even Unity. Mentioned here: Need guidelines for studying Game Development Also here in some comments: Where to start with game development? This judgement also befalls Java, according to some. Is it because in Flash it's so easy to draw graphics and to import and add any element we want onto the stage, and also because Flash needs a 'container program' to run and the others don't? But Flash is by far easier to 'distribute' than any of those mentioned above. Except maybe for iPhone or Android games.

    Read the article

  • ASA 5510 Need to filter traffic log events to my iPhone

    - by drpcken
    For some reason I cannot update or download apps on any iOS devices on my network (tried both iPhones and iPads). When I'm at home on my own network everything works fine. This started about a week ago. I've configured my iPhone with a static IP address and even used 4.2.2.2 as my DNS to rule out that the issue is with my DNS server. I'm looking at the SYSLOG in ASDM (Cisco ASA 5510) but I'm not sure it is providing me enough info. It seems to be showing ACL blocks on my public IP address, but not individual client IPs, so I can't see what's going on. How can I set up a way to filter any incoming/outgoing traffic to my iPhone's static IP and try to troubleshoot this?

    Read the article

  • Junior developer support

    - by lady_killer
    I am a junior developer in my first job after university. I joined the company as a PHP developer but I ended up developing using C# and ASP.NET. Right from the start I did not receive any training in C#, and I was assigned ASP.NET projects with quite tight deadlines scoped by senior developers. The few project handovers I had from other developers were brief, and it looked like I had to discover the system myself in a really short time. This is my first job as a web developer and I wonder whether it is normal not to have some kind of mentor to show me how to do things, especially because I am completely new to the technology. Also, do you have any idea how to tackle this? As you can imagine, it gets really frustrating! Thank you!

    Read the article
