Search Results

Search found 32219 results on 1289 pages for 'screen size'.


  • Dell Inspiron 1120 Ubuntu Light -> Desktop and now I'm having problems with wifi and suspend

    - by David N. Welton
    I got a Dell Inspiron 1120 which ships with Ubuntu Light, as well as Windows. My wife prefers Ubuntu, but obviously outside of web stuff, you can't do a lot with Light, so I went ahead and installed the Desktop version of Ubuntu (10.10 / maverick). Whereas before it suspended beautifully and connected to wifi networks flawlessly, it now displays the following problems: It seems to suspend ok, but on resume, the screen remains blank, even though the computer appears to wake up again. Wifi doesn't connect. I tried using the suggested proprietary drivers, and those don't seem to change the situation. All in all, a bit frustrating to run into these sorts of "regressions" - does anyone know what sort of drivers and such Ubuntu Light might have shipped with for this computer that made it work so well? Unfortunately, I wiped the disk in order to install the Desktop version of Ubuntu.

    Read the article

  • How to Force Graphics Options in PC Games with NVIDIA, AMD, or Intel Graphics

    - by Chris Hoffman
    PC games usually have built-in graphics options you can change. But you’re not limited to the options built into games: the graphics control panels bundled with graphics drivers allow you to tweak options from outside PC games. For example, these tools allow you to force-enable antialiasing to make old games look better, even if they don’t normally support it. You can also reduce graphics quality to get more performance on slow hardware.

    If You Don’t See These Options: If you don’t have the NVIDIA Control Panel, AMD Catalyst Control Center, or Intel Graphics and Media Control Panel installed, you may need to install the appropriate graphics driver package for your hardware from the hardware manufacturer’s website. The drivers provided via Windows Update don’t include additional software like the NVIDIA Control Panel or AMD Catalyst Control Center. Drivers provided via Windows Update also tend to be more out of date. If you’re playing PC games, you’ll want to have the latest graphics drivers installed on your system.

    NVIDIA Control Panel: The NVIDIA Control Panel allows you to change these options if your computer has NVIDIA graphics hardware. To launch it, right-click your desktop background and select NVIDIA Control Panel. You can also find this tool by performing a Start menu (or Start screen) search for NVIDIA Control Panel or by right-clicking the NVIDIA icon in your system tray and selecting Open NVIDIA Control Panel. To quickly set a system-wide preference, you could use the Adjust image settings with preview option. For example, if you have old hardware that struggles to play the games you want to play, you may want to select “Use my preference emphasizing” and move the slider all the way to “Performance.” This trades graphics quality for an increased frame rate. By default, the “Use the advanced 3D image settings” option is selected. You can select Manage 3D settings and change advanced settings for all programs on your computer or just for specific games. NVIDIA keeps a database of the optimal settings for various games, but you’re free to tweak individual settings here. Just mouse over an option for an explanation of what it does. If you have a laptop with NVIDIA Optimus technology (that is, both NVIDIA and Intel graphics), this is the same place you can choose which applications will use the NVIDIA hardware and which will use the Intel hardware.

    AMD Catalyst Control Center: AMD’s Catalyst Control Center allows you to change these options on AMD graphics hardware. To open it, right-click your desktop background and select Catalyst Control Center. You can also right-click the Catalyst icon in your system tray and select Catalyst Control Center, or perform a Start menu (or Start screen) search for Catalyst Control Center. Click the Gaming category at the left side of the Catalyst Control Center window and select 3D Application Settings to access the graphics settings you can change. The System Settings tab allows you to configure these options globally, for all games. Mouse over any option to see an explanation of what it does. You can also set per-application 3D settings and tweak your settings on a per-game basis. Click the Add option and browse to a game’s .exe file to change its options.

    Intel Graphics and Media Control Panel: Intel integrated graphics is nowhere near as powerful as dedicated graphics hardware from NVIDIA and AMD, but it’s improving and comes included with most computers.
Intel doesn’t provide anywhere near as many options in its graphics control panel, but you can still tweak some common settings. To open the Intel graphics control panel, locate the Intel graphics icon in your system tray, right-click it, and select Graphics Properties. You can also right-click the desktop and select Graphics Properties. Select either Basic Mode or Advanced Mode. When the Intel Graphics and Media Control Panel appears, select the 3D option. You’ll be able to set your Performance or Quality setting by moving the slider around, or click the Custom Settings check box and customize your Anisotropic Filtering and Vertical Sync preferences. Different Intel graphics hardware may have different options here. We also wouldn’t be surprised to see more advanced options appear in the future if Intel is serious about competing in the PC graphics market, as they say they are. These options are primarily useful to PC gamers, so don’t worry about them (or bother downloading updated graphics drivers) if you’re not a PC gamer and don’t use any intensive 3D applications on your computer. Image Credit: Dave Dugdale on Flickr

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here. This sample will look at the package and find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

        /// <summary>
        /// The CreationName of the Tasks to target, e.g. Execute SQL Task
        /// </summary>
        private const string TargetTaskCreationName =
            "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

        /// <summary>
        /// The name of the task property to target.
        /// </summary>
        private const string TargetPropertyName = "SqlStatementSource";

        /// <summary>
        /// The property expression to set.
        /// </summary>
        private const string ExpressionToSet = "@[User::SQLQueryVariable]";

        ....

        // Check if the task matches our target task type
        if (taskHost.CreationName == TargetTaskCreationName)
        {
            // Check for the target property
            if (taskHost.Properties.Contains(TargetPropertyName))
            {
                // Get the property, check for an expression and set expression if not found
                DtsProperty property = taskHost.Properties[TargetPropertyName];
                if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
                {
                    property.SetExpression(taskHost, ExpressionToSet);
                    changeCount++;
                }
            }
        }

    This is a console application, so to specify which packages you want to target you have three options:

    Find all packages in the current folder, the default behaviour if no arguments are specified: TaskExpressionPatcher.exe

    Find all packages in a specified folder, pass the folder as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\

    Find a specific package, pass the file path as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx

    The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to be the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words. Download Sample Project TaskExpressionPatcher.zip (6 KB)
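    For context, below is a rough sketch of what the surrounding wrapper code might look like, using the Microsoft.SqlServer.Dts.Runtime API to load each package, walk its top-level executables, and save any changes. This is not the downloadable sample itself; the folder handling is simplified, error handling is omitted, and the names are illustrative.

        using System;
        using System.IO;
        using Microsoft.SqlServer.Dts.Runtime;

        class TaskExpressionPatcher
        {
            static void Main(string[] args)
            {
                // Default to the current folder if no argument is supplied.
                string folder = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();
                Application application = new Application();

                foreach (string file in Directory.GetFiles(folder, "*.dtsx"))
                {
                    Package package = application.LoadPackage(file, null);
                    int changeCount = 0;

                    // Top-level executables only; containers are not descended into.
                    foreach (Executable executable in package.Executables)
                    {
                        TaskHost taskHost = executable as TaskHost;
                        if (taskHost == null)
                        {
                            continue;
                        }

                        // The expression-setting logic shown above goes here, checking
                        // taskHost.CreationName and calling SetExpression, incrementing
                        // changeCount when a change is actually made.
                    }

                    // Only save packages that were modified.
                    if (changeCount > 0)
                    {
                        application.SaveToXml(file, package, null);
                    }
                }
            }
        }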

    Read the article

  • What is the best Programming Language for Kiosk Application? [closed]

    - by Jen Lin
    I need your suggestions guys regarding the project I'm planning to create. I want to create a kiosk software/application that is capable of accessing a database on a server. (So, there's networking here, because the information displayed on the kiosk screen will come from a database on another computer.) So my problem here is that I don't know which programming language is best for this kind of application. I'm thinking about using Visual Basic 6.0 since my group is comfortable with this programming language, but I also want to consider the design. I don't like a plain button. Hope to hear from you guys, thanks much :)

    Read the article

  • Can't install Ubuntu on a Z68XP-UD3P board

    - by Carl
    I have tried to install Ubuntu versions 10, 11 and 12 (64-bit) on my Gigabyte Z68XP-UD3P motherboard, both from a live CD and by placing the DVD in the drive and starting the machine. When I go to install Ubuntu I get a dark screen and no text at all. When I go with the live CD, I reboot, go right into the install from the boot menu, and still get darkness. Nothing appears. I have an Intel i7 CPU with an Nvidia GeForce GTX 560 Ti video board.

    Read the article

  • Project Kapros: A Custom-Built Workstation Featuring an In-Desk Computer

    - by Jason Fitzpatrick
    While we’ve seen our fair share of case mods, it’s infrequent that we see one as polished and built-in as this custom-built workstation. What started as an IKEA Galant desk ended as a stunningly executed desk-as-computer build. High-gloss paint, sand-blasted plexiglass windows, custom lighting, and some quality hardware all come together in this build to yield a gorgeous setup with plenty of power and style to go around. Hit up the link below for a massive photo album build guide detailing the process from start to finish. Project Kapros: IKEA Galant PC Desk Mod [via Kotaku]

    Read the article

  • How To Enlarge a Virtual Machine’s Disk in VirtualBox or VMware

    - by Chris Hoffman
    When you create a virtual hard disk in VirtualBox or VMware, you specify a maximum disk size. If you want more space on your virtual machine’s hard disk later, you’ll have to enlarge the virtual hard disk and partition. Note that you may want to back up your virtual hard disk file before performing these operations – there’s always a chance something can go wrong, so it’s always good to have backups. However, the process worked fine for us. Image Credit: flickrsven

    Read the article

  • Play PlayStation Games on a Rooted Nook Simple Touch

    - by Jason Fitzpatrick
    Just when you feel like you’ve seen it all, some guy comes along and shows you how he can play original PlayStation games on his ebook reader. Check out the video to see the surprisingly full-speed (albeit black and white) graphics in action. The secret sauce in Sean’s cool setup? He’s rooted the device and installed Free PlayStation Emulator (FPSE) on it, along with the NoRefresh hack, to enjoy touch-screen controls and PS emulation. The whole thing is shockingly smooth; once you get past the choppy intro videos, the games run at full speed. [via Hack A Day]

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts used to log on. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live

    Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I?

    The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the Live account. On the other hand though, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account, and I have to log in to Windows with my Live account, what do you suppose this returns?

        Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in TaskManager (or Process Explorer for me) I see: Everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are!

    The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings, my app's login settings and I just could not for the life of me get into the site with my Windows username.
That is until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS if I look at a Trace page (or in my case my app's Status page) I see that the logged-on account is - my Windows user account. What's really annoying about this is that in some places it uses the Live account, and in other places it uses my Windows account. If I remote desktop into my Web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical… So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live ID even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows
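    If you want to see the mapping described above for yourself, a minimal console sketch using standard .NET APIs (nothing specific to this article's setup) prints the account the desktop session actually runs under; on a Live-enabled machine the post reports these still show the underlying Windows account rather than the Live ID:

        using System;
        using System.Security.Principal;

        class WhoAmI
        {
            static void Main()
            {
                // The account name the process runs as, per the .NET environment.
                Console.WriteLine("Environment.UserName:       " + Environment.UserName);

                // The Windows identity of the current user, including the machine or domain part.
                Console.WriteLine("WindowsIdentity (current):  " + WindowsIdentity.GetCurrent().Name);

                // The domain or machine name associated with the account.
                Console.WriteLine("Environment.UserDomainName: " + Environment.UserDomainName);
            }
        }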

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few, but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if, then, else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

    Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design of workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

Five Top Tips to take away:

Start by working out “why” and “how” customers are contacting you.
Implement a clean and relevant agent desktop to support your agents.
If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
Enhance your Knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
Script any complex, critical or sensitive interactions to ensure consistency and accuracy.

Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.

Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.

Useful resources: Review the Best Practice Guide. Review the tune-up guide.

About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class: RightNow Customer Service Administration (3 days), RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days), RightNow Analytics (2 days), RightNow Chat Cloud Service Administration (2 days).

    Read the article

  • How To Disable the Charms Bar and Switcher Hot Corners in Windows 8

    - by Chris Hoffman
    Several programs can prevent the app switcher and charms from appearing when you move your mouse to the corners of the screen in Windows 8, but you can do it yourself with this quick registry hack. You can also hide the charms bar and switcher by installing an application like Classic Shell, which will also add a Start menu and let you log directly into the desktop.

    Read the article

  • Sound Waves Visualized with a Chladni Plate and Colored Sand [Video]

    - by Jason Fitzpatrick
    This eye-catching demonstration combines a Chladni Plate, four piles of colored sand, and a rubber mallet to great effect: watch as the plate vibrates pattern after pattern into the sand. A Chladni Plate, named after physicist Ernst Chladni, is a steel plate that vibrates when rubbed with a rubber ball-style mallet. Different-sized balls create different frequencies, and each frequency creates a different pattern in the sand placed atop the plate. Watch the video above to see how rubber balls, large and small, change the patterns. [via Neatorama]

    Read the article

  • Can resizing images with css be good?

    - by Echo
    After reading Is CSS resizing of images still a bad idea?, I thought of a similar question. (Too similar? Should this be closed?) Let's say you need to use 10 different product image sizes throughout your website and you have 20k-30k different product images. Should you use 10 different files for each image size? Or maybe 5 different files and use CSS to resize the other 5? Would there ever be a combination that would be good? Or should you always make separate image files? If you use CSS to resize them, you will save on storage (in GBs) but you will have a slight increase in bandwidth and slower-loading images (but if images are cached, and you show both sizes of the image, would you use less bandwidth and have faster loads?). (But of course you wouldn't want to use CSS to resize images for mobile sites.)
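    For the "separate image files" option discussed above, the smaller variants can be generated once on the server rather than scaled by the browser on every view. A rough sketch using System.Drawing (classic .NET Framework); the file names and sizes are illustrative and error handling is omitted:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        class ThumbnailMaker
        {
            static void Main()
            {
                // Pre-generate one extra size for one (hypothetical) product image.
                using (Image original = Image.FromFile(@"C:\images\product-1200.jpg"))
                using (Bitmap resized = new Bitmap(300, 300))
                using (Graphics g = Graphics.FromImage(resized))
                {
                    // High-quality resampling so the stored thumbnail looks at least as good
                    // as a browser-scaled version would.
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.DrawImage(original, 0, 0, resized.Width, resized.Height);
                    resized.Save(@"C:\images\product-300.jpg", ImageFormat.Jpeg);
                }
            }
        }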

    Read the article

  • Best approach for a database of long strings

    - by gsingh2011
    I need to store questions and answers in a database. The questions will be one to two sentences, but the answers will be long, at least a paragraph, likely more. The only way I know of to do this right now is an SQL database. However, I don't feel like this is a good solution because, as far as I've seen, these databases aren't used for data of this type or size. Is this the correct way to go, or is there a better way to store this data? Is there a better way than storing raw strings?
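    For a sense of scale, paragraph-length text fits comfortably in an ordinary large text column of a relational database. A minimal sketch, assuming SQL Server, System.Data.SqlClient, and a hypothetical qa table whose answer column is NVARCHAR(MAX) (all names illustrative):

        using System.Data.SqlClient;

        class QaStore
        {
            static void Main()
            {
                string connectionString = "Server=.;Database=QaDemo;Integrated Security=true";
                string question = "How should long answers be stored?";
                string answer = "A long, multi-paragraph answer goes here...";

                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = new SqlCommand(
                    "INSERT INTO qa (question, answer) VALUES (@question, @answer)", connection))
                {
                    // Parameters avoid any escaping or length concerns for the long string.
                    command.Parameters.AddWithValue("@question", question);
                    command.Parameters.AddWithValue("@answer", answer);

                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }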

    Read the article

  • Mythbuntu initial setup can't connect to server

    - by Hawke
    I'm really new to Linux, and I just installed Mythbuntu on a standalone PC. It's all installed OK, I've logged on and started the setup, but I'm having issues. I select the language OK; the next screen is database setup. I select next, but it says it can't connect to the server and I just loop back. I've done some googling and checked the MySQL database password, and that is correct; I've also checked that my username belongs to the mythtv group, and it does. Can anyone help? I've tried reinstalling but it doesn't change. Many thanks.

    Read the article

  • So what are zones really?

    - by Bertrand Le Roy
    There is a (not so) particular kind of shape in Orchard: zones. Functionally, zones are places where other shapes can render. There are top-level zones, the ones defined on Layout, where widgets typically go, and there are local zones that can be defined anywhere. These local zones are what you target in placement.info. Creating a zone is easy because it really is just an empty shape. Most themes include a helper for it:

        Func<dynamic, dynamic> Zone = x => Display(x);

    With this helper, you can create a zone by simply writing:

        @Zone(Model.Header)

    Let's deconstruct what's happening here with that weird lambda. In the Layout template where we are working, the Model is the Layout shape itself, so Model.Header is really creating a new Header shape under Layout, or getting a reference to it if it already exists. The Zone function is then called on that object, which is equivalent to calling Display. In other words, you could have just written the following to get the exact same effect:

        @Display(Model.Header)

    The Zone helper function only exists to make the intent very explicit. Now here's something interesting: while this works in the Layout template, you can also make it work from any deeper-nested template and still create top-level zones. The difference is that wherever you are, Model is not the layout anymore, so you need to access it in a different way:

        @Display(WorkContext.Layout.Header)

    This is still doing the exact same thing as above. One thing to know is that for top-level zones to be usable from the widget editing UI, you need one more thing, which is to specify it in the theme's manifest:

        Name: Contoso
        Author: The Orchard Team
        Description: A subtle and simple CMS theme
        Version: 1.1
        Tags: business, cms, modern, simple, subtle, product, service
        Website: http://www.orchardproject.net
        Zones: Header, Navigation, HomeFeaturedImage, HomeFeaturedHeadline, Messages, Content, ContentAside, TripelFirst, TripelSecond, TripelThird, Footer

    Local zones are just ordinary shapes like global zones, the only difference being that they are created on a deeper shape than layout.
    For example, in Content.cshtml, you can find our good old code for creating a header zone:

        @Display(Model.Header)

    The difference here is that Model is no longer the Layout shape, so that zone will be local. The name of that local zone is what you specify in placement.info, for example:

        <Place Parts_Common_Metadata_Summary="Header:1"/>

    Now here's the really interesting part: zones do not even know that they are zones, and in fact any shape can be substituted. That means that if you want to add new shapes to the shape that some part has been emitting from its driver, for example, you can absolutely do that. And because zones are so barebones as shapes go, they can be created the first time they are accessed. This is what enables us to add shapes into a zone before the code that you would think creates it has even run. For example, in the Layout.cshtml template in TheThemeMachine, the BadgeOfHonor shape is being injected into the Footer zone on line 47, even though that zone will really be "created" on line 168.
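    As an illustration of that last point, a shape can be pushed into a top-level zone from more or less any view. A minimal sketch (this is not the exact code from TheThemeMachine; the shape name and position string are illustrative):

        @{
            // Accessing WorkContext.Layout.Footer creates the Footer zone shape if it does
            // not exist yet; Add then appends a shape built with Orchard's dynamic shape
            // factory (New) at the given position.
            WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5");
        }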

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...

    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free; no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Unsatisfied Link Error and missing .so files when starting Eclipse

    - by Keidax
    I upgraded to the 12.04 beta yesterday. Now, when I try to start Eclipse, I get the splash screen and then this error message: An error has occurred. See the log file /home/gabriel/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/1335382319394.log . The log file says something like this: java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: no swt-gtk-3740 in java.library.path no swt-gtk in java.library.path Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk-3740.so Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk.so followed by many more error messages. The /home/gabriel/.swt/lib/linux/x86_64/ directory exists, but is empty. I also tried reinstalling eclipse with no success. Any ideas?

    Read the article

  • FGLRX does not have /usr/lib/libGL.so lib

    - by Paulo Lopes
    I am trying to compile some OpenGL apps from sources but there is no /usr/lib/libGL.so, or /usr/lib/libGL.so.1 or even /usr/lib/libGL.so.1.2. However fglrxinfo says: display: :0.0 screen: 0 OpenGL vendor string: ATI Technologies Inc. OpenGL renderer string: ATI Mobility Radeon HD 4500 Series OpenGL version string: 3.3.10237 Compatibility Profile Context Which is what i espect and the gears demo also works... How do i get those files? I tried the MESA files and then i got SW rendering which is not what i want...

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS Queue.  It just so happens that the WLS Console has this capability already.  It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab as seen in the screen shot below. The console is great for something ad-hoc, but what if I want to automate this?  Well it just so happens that the console is just a weblogic application layered on top of the JMX Management interface.  If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages that takes a JMS Message Selector as an argument.  If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting.  In addition to the console, WebLogic Scripting Tool (WLST) based on Jython is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would looks like this: D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   wls:/offline> connect('weblogic','welcome1','t3://localhost:7001') Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   wls:/hotspot_domain/serverConfig> serverRuntime() Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. 
    For more help, use help(serverRuntime)

    wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0')
    wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls()
    dr--   DurableSubscribers

    -r--   BytesCurrentCount            0
    -r--   BytesHighCount               174620
    -r--   BytesPendingCount            0
    -r--   BytesReceivedCount           253548
    -r--   BytesThresholdTime           0
    -r--   ConsumersCurrentCount        0
    -r--   ConsumersHighCount           0
    -r--   ConsumersTotalCount          0
    -r--   ConsumptionPaused            false
    -r--   ConsumptionPausedState       Consumption-Enabled
    -r--   DestinationInfo              javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo,items=((itemName=ApplicationName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ModuleName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName openmbean.SimpleType(name=java.lang.Boolean)),(itemName=SerializedDestination,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ServerName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=Topic,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=VersionNumber,itemType=javax.management.op ule-0!Queue-0, Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1})
    -r--   DestinationType              Queue
    -r--   DurableSubscribers           null
    -r--   InsertionPaused              false
    -r--   InsertionPausedState         Insertion-Enabled
    -r--   MessagesCurrentCount         0
    -r--   MessagesDeletedCurrentCount  3
    -r--   MessagesHighCount            2
    -r--   MessagesMovedCurrentCount    0
    -r--   MessagesPendingCount         0
    -r--   MessagesReceivedCount        3
    -r--   MessagesThresholdTime        0
    -r--   Name                         SystemModule-0!Queue-0
    -r--   Paused                       false
    -r--   ProductionPaused             false
    -r--   ProductionPausedState        Production-Enabled
    -r--   State                        advertised_in_cluster_jndi
    -r--   Type                         JMSDestinationRuntime

    -r-x   closeCursor              Void : String(cursorHandle)
    -r-x   deleteMessages           Integer : String(selector)
    -r-x   getCursorEndPosition     Long : String(cursorHandle)
    -r-x   getCursorSize            Long : String(cursorHandle)
    -r-x   getCursorStartPosition   Long : String(cursorHandle)
    -r-x   getItems                 javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count)
    -r-x   getMessage               javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle)
    -r-x   getMessage               javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID)
    -r-x   getMessage               javax.management.openmbean.CompositeData : String(messageID)
    -r-x   getMessages              String : String(selector),Integer(timeout)
    -r-x   getMessages              String : String(selector),Integer(timeout),Integer(state)
    -r-x   getNext                  javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count)
    -r-x   getPrevious              javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count)
    -r-x   importMessages           Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly)
    -r-x   moveMessages             Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer)
    -r-x   moveMessages             Integer : String(selector),javax.management.openmbean.CompositeData
    -r-x   pause                    Void :
    -r-x   pauseConsumption         Void :
    -r-x   pauseInsertion           Void :
    -r-x   pauseProduction          Void :
    -r-x   preDeregister            Void :
    -r-x   resume                   Void :
    -r-x   resumeConsumption        Void :
    -r-x   resumeInsertion          Void :
    -r-x   resumeProduction         Void :
    -r-x   sort                     Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)

    wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('')
    2

    where the domain name is “hotspot_domain”, the JMS Server name is “JMSServer-0”, the Queue name is “Queue-0” and the System Module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.
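    Building on the session above, the same deletion can be scripted non-interactively. The following is a minimal sketch, assuming the URL, credentials, and MBean path shown in the transcript (in a real setup the credentials would come from an encrypted file, as the article suggests):

    # delete_all_messages.py -- run with: java weblogic.WLST delete_all_messages.py
    # Minimal sketch; the URL, credentials, and JMS object names below are
    # assumptions carried over from the interactive session above.

    connect('weblogic', 'welcome1', 't3://localhost:7001')

    # Switch to the server runtime tree, where the JMS runtime MBeans live.
    serverRuntime()

    # Navigate to the destination runtime MBean for SystemModule-0!Queue-0.
    cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0')

    # An empty selector is a wildcard, so every message on the queue is deleted.
    deleted = cmo.deleteMessages('')
    print 'Deleted %d messages' % deleted

    disconnect()
    exit()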

    Read the article

  • Byobu looks broken in Putty from Windows

    - by TheLQ
    After trying screen for 2 days I already hate it and am now trying byobu. However currently it looks very broken in Putty. I've already fixed the key mapping issue, but this issue isn't specified in the man page or even google: Notice the misplaced position of the list of windows, the broken selector position, the duplication of the last window, the random a in the top right, and the misplaced apply option. You can't see this, but the last option is not selectable. Is there some option in Putty I need to use in order to see this correctly?

    Read the article

  • DIY Standing Desk Sports Super Sturdy Galvanized Pipe Legs

    - by Jason Fitzpatrick
    If you’re looking for a standing desk sturdy enough you can tap dance on it this DIY creation features thick pipe legs and a solid oak desktop. Courtesy of designer Jessica Allen, this standing desk can easily support a bank of monitors, heavy equipment, and even your entire body if need be, thanks to a sturdy galvanized plumbing pipe undercarriage and a 1″ thick oak top. We love the clean lines of the desk but we’d be tempted to clutter them up a little with a tower-rack mounted under the desk or on the inside of the thick pipe legs. Hit up the link below to check out the full build log. Have a cool standing desk (or desk tutorial) to share? Sound off in the comments. Steel Pipe Standing Desk HTG Explains: Why Screen Savers Are No Longer Necessary 6 Ways Windows 8 Is More Secure Than Windows 7 HTG Explains: Why It’s Good That Your Computer’s RAM Is Full

    Read the article

  • The Best Websites and Software for Brainstorming and Mind Mapping

    - by Lori Kaufman
    A mind map is a diagram that allows you to visually outline information, helping you organize, solve problems, and make decisions. Start with a single idea in the center of the diagram and add associated ideas, words, and concepts connected radially around the central idea. We’ve collected links to websites and software that can help you create mind maps, and collaborate on and share your maps with others. The programs and websites listed here are all either free or have a free option. How To Delete, Move, or Rename Locked Files in Windows HTG Explains: Why Screen Savers Are No Longer Necessary 6 Ways Windows 8 Is More Secure Than Windows 7

    Read the article

  • How do I log into bash shell only?

    - by Tom D
    On my home desktop I want to use Ubuntu Unity sometimes and just the bash shell (without any gui) other times. Is it possible to set up a login option where I can choose between using the Unity GUI or just the shell? For example, on the Ubuntu login screen I can choose among Unity, Gnome Shell, XFCE, etc. An option there for just the Bash shell command line would be ideal. I'm not trying to invite "why would you do that" debate here. I have my reasons. Thanks.

    Read the article

  • General input/output error on OpenOffice 3.2

    - by Scott
    I have been working on a research paper in OpenOffice 3.2 Word Processor and I had a slide show OpenOffice 3.2 Presentation. I saved both frequently and they were not very big (no more then 5 pages of text or a few slides). A few days after I last edited and saved them, I tried to open the slide show but I received window saying: General Error. General input/output error When I tried opening the paper, it asked for ASCII Filter Options (font, character, language, and paragraph break). When I set the normal settings, a single blank document opened with nothing on it. I looked at both of the file's properties and the size was 0 or 1.5 KB, and the volume was unknown. I recently installed some updates for Ubuntu 10.10 and everything else in my computer is working normally including other documents and slide shows. Is this a fixable crash?

    Read the article

< Previous Page | 617 618 619 620 621 622 623 624 625 626 627 628  | Next Page >