Search Results

Search found 31410 results on 1257 pages for 'disk based'.


  • eLearning event on HTML5 for Mobile with jQuery Mobile

    - by Wallym
    I'll be doing an eLearning event on HTML5 for Mobile with jQuery Mobile. There will also be a few items sprinkled in on ASP.NET Razor. Mobile development is a hot item. Customers are buying iPhones, iPads, Android devices, and many other mobile computing devices at an ever-increasing pace. Devices based on iOS and Android make up nearly 80 percent of the marketplace. RIM continues to be dominant in the business arena across the world. Nokia's Windows Phone devices are expected to grow on a worldwide basis. At the same time, web development is clearly a tremendous driver of applications, both on the public Internet and on private networks. How can developers target these various mobile platforms with web technologies? Developers can write web applications that take advantage of each mobile platform, but that is a lot of work. Into this space, the jQuery Mobile framework was developed. This eLearning series will provide an overview of mobile web development with jQuery Mobile, a detailed look at what the jQuery Mobile framework provides for us, how we can customize jQuery Mobile, and how we can use jQuery Mobile inside of ASP.NET. Link: http://elearning.left-brain.com/event/mobile-web-development

    Read the article

  • Information regarding Collection 6233 - Implementing and Maintaining Business Intelligence in Microsoft SQL Server 2008

    - by Testas
    At the London SQL Server User Group I was asked a number of questions regarding the release of Collection 6233 - Implementing and Maintaining Business Intelligence in Microsoft® SQL Server® 2008: Integration Services, Reporting Services and Analysis Services, which I authored - particularly regarding the SSIS component of the collection. Elearning is an interactive training experience that enables you to learn at your own pace, with a variety of learning tools including demonstrations, animations and written materials, plus labs that enable you to reinforce your learning. Microsoft Elearning can provide a valuable learning tool when you do not have the time to take out of the office to attend a course. This 24-hour collection provides you with the skills and knowledge required to implement and maintain business intelligence solutions on SQL Server 2008 and also helps students prepare for Exam 70-448. You can buy each part individually; see: http://www.microsoft.com/learning/elearning/course/6233.mspx However, you will create a simple data warehouse in this collection and use SSIS to create packages to populate the data warehouse with data, exploring key concepts and tools to facilitate this. This was a decision that I took when writing this course, based on feedback from hundreds of students who attended Microsoft Official Courses on SSIS. They wanted a course that allowed them to use SSIS to work with a data warehouse. This collection will certainly enable you to explore the options available in SSIS to meet this requirement while at the same time meeting the certification requirements. I hope this answers the questions regarding this collection, and I hope you enjoy it. Chris

    Read the article

  • Good book for a software developer doing part-time (Linux) system administration work

    - by Tony Meyer
    In many smaller organisations, developers often end up doing some system administration work (for obvious reasons). A lot of the time, they have great developer skills, but few system administration skills (perhaps all self-taught), and so have to learn as they go, which is fairly inefficient. Are there canonical (or simply great) books that would help in this situation? More advanced than just using a shell (presumably a developer can do that), but not aimed at someone that hopes to spend many years doing this work. Ideally, something fairly generic (although specific to a distribution would be OK), covering databases, networking, general maintenance, etc, not just one specific task. For the most part, I'm interested in shell-based work (i.e. no GUI installed), although if there's something outstanding I'm missing, please point it out. (As an analogy, replace "system administration" with C, and I'd want K&R, with C++ and I'd want Meyers' "Effective C++").

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default. The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms, this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
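As an illustration of the DDL involved, here is a minimal sketch (the session name and event are placeholders rather than anything from this post, and the ring_buffer target is chosen simply because it is an asynchronous target):

    CREATE EVENT SESSION [quick_feedback_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);  -- process buffers every 5 seconds instead of the default 30
    GO
    ALTER EVENT SESSION [quick_feedback_demo] ON SERVER STATE = START;
    GO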
The MAX_DISPATCH_LATENCY option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory. Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, we drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple of executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over-collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail. A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a larger event buffer. To specify a larger event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
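A sketch of the checks and changes just described (the session name and the sizes are illustrative assumptions; session options generally cannot be changed while the session is running, so stop it first):

    -- Is the session dropping events? Check the DMV fields mentioned above.
    SELECT name, dropped_event_count, largest_event_dropped_size
    FROM sys.dm_xe_sessions
    WHERE name = N'quick_feedback_demo';

    -- Give the session more buffer memory, and allow for oversized events if
    -- largest_event_dropped_size shows you need to (MAX_EVENT_SIZE needs to be
    -- larger than MAX_MEMORY).
    ALTER EVENT SESSION [quick_feedback_demo] ON SERVER
    WITH (MAX_MEMORY = 16 MB, MAX_EVENT_SIZE = 32 MB);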
Partitioning: moving your events to a sub-division. Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers, and they are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts: with 2 CPUs and 1 Node you get 3 buffers for None, 3 for Node and 5 for CPU; with 6 CPUs and 2 Nodes you get 3, 6 and 15 buffers respectively; with 40 CPUs and 5 Nodes you get 3, 15 and 100 buffers respectively. Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.
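As a sketch of what a partitioned configuration might look like (again, the session, event and numbers are illustrative assumptions only):

    CREATE EVENT SESSION [partitioned_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MAX_MEMORY = 16 MB,                 -- more total memory so each buffer set stays a useful size
          MEMORY_PARTITION_MODE = PER_NODE);  -- one set of buffers per NUMA node (PER_CPU and NONE are the other modes)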
The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over-collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end-to-end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something. OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

  • SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big

    - by pinaldave
    After reading my earlier article SQL SERVER – master Database Log File Grew Too Big, I recently received an email from another reader asking why the log file of the model database grows every day when he is not carrying out any operation in the model database. As per the email, he is absolutely sure that he is doing nothing on his model database; he had used policy management to catch any T-SQL operation in the model database and there were none. This was indeed surprising to me. I sent a request for access to his server, to which he happily agreed, and within a minute we figured out the issue. He was taking a full backup of the model database every night. When I explained this to him, he did not believe it, so I quickly wrote down the following script. The results before and after running the script were very clear. What is a model database? The model database is used as the template for all databases created on an instance of SQL Server. Any object you create in the model database will be automatically created in every subsequent user database created on the server. NOTE: Do not run this in a production environment. During the demo, the model database was in full recovery mode and only full backup operations were performed (no log backups). Backup script in a loop: DECLARE @FLAG INT SET @FLAG = 1 WHILE (@FLAG < 1000) BEGIN BACKUP DATABASE [model] TO DISK = N'D:\model.bak' SET @FLAG = @FLAG + 1 END GO Why did this happen? The model database was in full recovery mode, and taking a full backup is a logged operation. As there was no log backup and only full backups were performed on the model database, the size of the log file kept growing. Resolution: Change the recovery model of the model database from Full to Simple, and take a full backup of the model database only when you change something in it. Have you encountered a situation like this? If so, how did you resolve it? It will be interesting to know about your experience. Reference: Pinal Dave (http://blog.SQLAuthority.com)
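    As a sketch of the resolution steps described above (reusing the backup path from the demo script; as with the demo, don't run this blindly against a production server):

        -- Check the current recovery model of the model database
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'model';

        -- Switch model to the simple recovery model so the log can be truncated
        -- automatically instead of waiting for log backups that never happen
        ALTER DATABASE [model] SET RECOVERY SIMPLE;

        -- Take a one-off full backup after making a change to the model database
        BACKUP DATABASE [model] TO DISK = N'D:\model.bak';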

    Read the article

  • CentOS 5.5 APIC issue on ESX 4.1 & ML115

    - by Adnan
    Hi, I've just installed vSphere 4.1 on an HP ProLiant ML115 G5 Quad-core and am trying to install CentOS 5.5 as a guest system. However, when the guest boots up I get a calibrate_APIC_clock warning and a kernel panic message. I've come across this knowledge base article on the vmware website which suggests moving the guest onto another Intel based host (!). Funnily enough I don't have a collection of spare host servers sitting around, so can anyone suggest another solution? Alternatively, would installing an earlier version of CentOS get around this issue, or would a yum update put me back to square one? How about BIOS settings, could anything be tweaked there? Thanks.

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of ubuntu servers (mostly 8.04 LTS) which all mount an nfs share at /nfs. We use the nfs primarily for two purposes: symlinking config files (such as apache vhosts) reading & writing uploaded files This all works great except it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is if we could mount the NFS through some local caching layer which would make any file which had previously been read remain available even if /nfs isn't. Writes could be disabled for this period. Searching around it looks like cachefilesd may be an option. Unfortunately, it's only packaged for ubuntu 9.10 & 10.04 it looks like. I was also looking for a FUSE-based solution which might fit the bill, but hadn't found anything yet. Any suggestions would be greatly appreciated!

    Read the article

  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University.... Whether you’re part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses.  Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process. Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization’s financial management processes - including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options to make the right implementation decisions. Courses vary based on your role in the implementation and on-going use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout. Here’s some of the roles that should consider training: · Configuration or functional implementers · Implementation Consultants (Oracle partners) · Super Users · Business Analysts · Financial Reporting Specialists · Administrators PeopleSoft Financial Management Courses: New Features Course: · PeopleSoft Financial Solutions Rel 9.2 New Features Functional Training: · PeopleSoft General Ledger Rel 9.2 · PeopleSoft Payables Rel 9.2 · PeopleSoft Receivables Rel 9.2 · PeopleSoft Asset Management Rel 9.2 · Expenses Rel 9.2 · PeopleSoft Project Costing Rel 9.2 · PeopleSoft Billing Rel 9.2 · PeopleSoft PS / nVision for General Ledger Rel 9.2 Accelerated Courses (include content from two courses for more experienced team members): · PeopleSoft General Ledger Foundation Accelerated Rel 9.2 · PeopleSoft Billing / Receivables Accelerated Rel 9.2 · PeopleSoft Purchasing / Payable Accelerated Rel 9.2 View PeopleSoft Training Overview Video

    Read the article

  • Show Notes: Bob Hensle on IT Strategies from Oracle

    - by Bob Rhubart
    The latest ArchBeat Podcast (RSS) features a conversation with Oracle Enterprise Architecture director Bob Hensle (LinkedIn). Bob talks about IT Strategies from Oracle, an extensive library of reference architectures, best practices, and other documents now available (it’s a freebie!) to registered Oracle Technology Network members. Listen to Part 1: Bob offers some background on the IT Strategies from Oracle project and an overview of the included documents. Listen to Part 2 (Feb 16): A discussion of how SOA and other issues are reflected in the IT Strategies documents. Share your feedback on any of the documents in the IT Strategies from Oracle Library: [email protected] For a nice complement to the IT Strategies from Oracle Library, check out Oracle Experiences in Enterprise Architecture, an ongoing series of short essays from members of the Oracle Enterprise Architecture team based on their field experience. In the Pipeline: ArchBeat programs in the works include an interview with Dr. Frank Munz, the author of Middleware and Cloud Computing, excerpts from another architect virtual meet-up, and a conversation with Oracle ACE Director Debra Lilley about her insight into Fusion Applications. Stay tuned: RSS

    Read the article

  • Exchange not preserving the "To:" field

    - by Matt Simmons
    I've got a hosted exchange solution through Apptix, which isn't the problem, I think, but it may be relevant. I have my main account, [email protected], and to that, I have an alias, [email protected]. Whenever I send an email to [email protected], I examine the headers, and I see the "To:" field being correct, "To: [email protected]". All is well. I recently set up another user, [email protected] to function as a multipurpose mailbox. I aliased "[email protected]" to the services account in the same method that I did "[email protected]", however nothing I have sent to "[email protected]" actually goes TO "[email protected]". All of the headers say "To: [email protected]". This makes it extremely difficult to filter based on headers alone. Does anyone have any feedback on what settings I would need to look at in order to fix that?

    Read the article

  • General Overview of Design Pattern Types

    Typically, software engineering design patterns fall into one of three categories: Creational Type Patterns, Structural Type Patterns, and Behavioral Type Patterns. The Creational Pattern type is geared toward defining the preferred methods for creating new instances of objects. An example of this type is the Singleton Pattern. The Singleton Pattern can be used if an application only needs one instance of a class. In addition, this singular instance also needs to be accessible across an application. The benefit of the Singleton Pattern is that you control both instantiation and access using this pattern. The Structural Pattern type is a way to describe the hierarchy of objects and classes so that they can be consolidated into a larger structure. An example of this type is the Façade Pattern. The Façade Pattern is used to define a base interface so that all other interfaces inherit from the parent interface. This can be used to simplify a number of similar object interactions into one single standard interface. The Behavioral Pattern type deals with communication between objects. An example of this type is the State Design Pattern. The State Design Pattern enables objects to alter functionality and processing based on the internal state of the object at a given time.

    Read the article

  • How To Add Image And Text Watermarks to MS Word Documents

    - by Kavitha
    A watermark is a faint image that appears behind your text in MS Word documents. Draft/Confidential are the most common background watermarks that we see in the documents circulated at the office. MS Word 2007/2010 makes it very easy to add watermarks as well as customize them based on your requirements. Add an image watermark to an MS Word document: To add an image watermark to your document, follow these steps. 1. Switch to the Page Layout tab of the Ribbon menu. 2. Click on the Watermark drop-down menu and choose the Custom Watermark option. 3. Choose the Picture watermark option, click on the Select Picture.. button and choose a watermark image. 4. Click OK. That's all. You are done. Add a text watermark to an MS Word document: To add a text watermark to your document, follow these steps. 1. Switch to the Page Layout tab of the Ribbon menu. 2. Click on the Watermark drop-down menu. 3. In the opened window, you can select one of the predefined text watermarks like Confidential, Draft, ASAP, URGENT, etc. If you are looking for one of these watermarks you can choose it; otherwise click on the option Custom Watermark… 4. Choose the option Text watermark and enter the text you want to set as the watermark in the Text: input area. 5. Click on the OK button. That's all. This article, How To Add Image And Text Watermarks to MS Word Documents, was originally published at Tech Dreams.

    Read the article

  • SBS2003 OWA not displaying images on external side

    - by JasonC
    I've got an SBS 2003 server with a companyweb/remote OWA site. After authenticating remotely to the https://servername/remote site, and then clicking on Use Outlook Web Access, the mail/owa site displays only limited amounts of data with Loading... displayed (see first image). This is different when viewing the site on the server itself (see second image). Different browsers will display different types of data (the last screenshot is Chrome - the first two are IE). I feel IIS permissions on the virtual directories are the cause, but I'm not certain. Forms-based authentication etc. is working OK. Any suggestions? Authentication and getting email via ActiveSync are not a problem. I don't want to break anything, and it's not a major problem because the users only get their email via Outlook and iPhones anyway without problems. I'm more interested in finding out for my own information, so please give me some suggestions to try. Thanks for looking!

    Read the article

  • IASA South East Florida Chapter Meeting Recap - June 2011

    - by Sam Abraham
    Erik Russell and Giles Marino were our speakers for the June 2011 IASA South East Florida Chapter meeting.    Attendees filled all available seats at the Microsoft office conference room where the event was held. This highlights the high interest in Enterprise Architecture as a career track and chartered project role. Also in attendance were our Board of Directors and Alex Funkhouser, President, Sherlock Technology.   Rainer Habermann, Chapter President, kicked off the meeting by introducing our speakers and Board of Directors.   Alex Funkhouser, President of South Florida’s staffing firm Sherlock Technology spoke briefly about available Software Architect positions in the area. Alex also congratulated and presented this week’s Sherlock Raffle winner with $500 in cash.   Our speakers Giles and Erik then proceeded with their talk. Erik presented a business case in the government sector where Enterprise Architecture helped a government entity cut costs and streamline its various business operations. Technologies leveraged in Erik’s demonstrated project were Java-based.   Giles then followed with a thorough demonstration of the Architecture patterns he used to migrate a complete backend system for an insurance company to the .Net Platform.   Audience was very engaged with our speakers as evidenced by the large number of follow-up questions asked at the end of the talk.   We greatly enjoyed Giles and Erik’s talk and look forward to having them share with us more of their adventures as Enterprise Architects in the near future.   Below are some photos of the event.   Sam Abraham Secretary- IASA South East Florida Chapter. http://www.iasaglobal.org/iasa/South_East_Florida.asp Chapter President - Rainer Habermann kicks off our meeting.   Sherlock Technology President Alex Funkhouser holding Sherlock's weekly cash prize. Alex shares available Software Architect opportunities with our members Erik Russell addressing our membership Giles Marino sharing his architecture experience in the insurance industry In this photo: Dave Noderer, Rainer Habermann, Quent Herschelman and Alex Funkhouser. Event attracted a large audience and filled the Microsoft conference room where it was held

    Read the article

  • Keyboard doesn't work after upgrade to Debian Wheezy

    - by mikhail
    After upgrading from lenny to wheezy, the keyboard and mouse don't work in X (the keyboard is available before X starts). I looked around the internet about this issue and found some suggested solutions: remove xorg.conf (http://forums.debian.net/viewtopic.php?f=7&t=62880), update udev and base-files (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136), remove the /run directory (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136), and reinstall xserver and xorg. But nothing helped me :( The X server logs don't contain any messages about keyboard or mouse errors. Below you can see the configuration of my system: krestyaninov@xxx# uname -a Linux xxx 3.0.0-1-686-pae #1 SMP Sat Aug 27 16:41:03 UTC 2011 i686 GNU/Linux krestyaninov@xxx# dpkg -l |grep udev ii libgudev-1.0-0 172-1 GObject-based wrapper library for libudev ii libudev0 172-1 libudev shared library ii udev 172-1 /dev/ and hotplug management daemon krestyaninov@xxx# dpkg -l |grep base-files ii base-files 6.5 Debian base system miscellaneous files krestyaninov@xxx# dpkg -l |grep xorg ii xorg 1:7.6+8 X.Org X Window System ... ii xserver-xorg 1:7.6+8 X.Org X server

    Read the article

  • Keep basic game physics separate from basic game object? [on hold]

    - by metamorphosis
    If anybody has dealt with a similar situation I'd be interested in your experience/wisdom, I'm developing a 2D game library in C++, I have game objects which have very basic physics, they also have movement classes attached to differing states, for example, a different movement type based on whether the character is jumping, on ice, whatever. In terms of storing velocity and acceleration impulses, are they best held by the object? Or by the associated movement class? The reason I ask is that I can see advantages to both approaches- if you store physics data in the movement class, you have to pass physics information between class instances when a state change occurs (ie. impulses, gravity etc) but the class has total control over whether those physics are updated or not. An obvious example of how this would be useful was if an object was affected by something which caused it to ignore gravity, or something like that. on the other hand if you store the physics data in the object class, it feels more logical, you don't have to go around passing physics impulses and gravity etc, however the control that the movement class has over the object's physics becomes more convoluted. Basically the difference is between: object->physics stacks (acceleration impulses etc) ->physics functions ->movement type <-movement type makes physics function calls through object and object->movement type->physics stacks ->physics functions ->object forwards external physics calls onto movement type ->object transfers physics stacks between movement types when state change occurs Are there best practices here?

    Read the article

  • Visual Studio LightSwitch: Yes, these are the droids you&rsquo;re looking for

    - by Jim Duffy
    With all the news and focus on the new features coming in Silverlight 5, I thought I'd take a few minutes to remind folks about the work that Microsoft has done on LightSwitch, since the applications created by LightSwitch are Silverlight applications. LightSwitch makes it easier for non-coders to build business applications and easier for coders to maintain them. For those not familiar with LightSwitch, it is a new tool that provides an easier and quicker way for coder and non-coder types alike to create line-of-business applications for the desktop, the web, and the cloud. The target audience for this tool is those power-user types who create Access applications for their organization. While those Access applications fill an immediate need, they typically aren't very scalable, extendable and/or maintainable by the development staff of the organization. LightSwitch creates applications based on technologies built into Visual Studio, thus making it easier for corporate developers to extend and maintain them. LightSwitch is currently in beta but it will ultimately become a new addition to the Visual Studio line of products. Go ahead and download the beta to get a better idea of what the product can do for your organization. The LightSwitch Developer Center contains links to download the beta, instructional videos, tutorials, and the LightSwitch Training Kit. Another quality resource for LightSwitch information is the Visual Studio LightSwitch Team Blog. My good friend Beth Massi is on the LightSwitch team and has additional valuable content on her blog. Have a day.

    Read the article

  • Is duck typing a subset of polymorphism

    - by Raynos
    From Polymorphism on Wikipedia: In computer science, polymorphism is a programming language feature that allows values of different data types to be handled using a uniform interface. From duck typing on Wikipedia: In computer programming with object-oriented programming languages, duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface. My interpretation is that with duck typing, the object's methods/properties determine the valid semantics, meaning that the object's current shape determines the interface it upholds. From polymorphism you can say a function is polymorphic if it accepts multiple different data types as long as they uphold an interface. So if a function can duck type, it can accept multiple different data types and operate on them as long as those data types have the correct methods/properties and thus uphold the interface. (Usage of the term interface here is meant not as a code construct but more as a descriptive, documenting construct.) What is the correct relationship between duck typing and polymorphism? If a language can duck type, does that mean it can do polymorphism?

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors: jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer [sudo] password for jon: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kupfer : Depends: python-keybinder but it is not going to be installed Recommends: python-wnck but it is not going to be installed linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic linux-headers-generic linux-image-generic The following packages will be upgraded: linux-generic linux-headers-generic linux-image-generic 3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 3 not fully installed or removed. Need to get 0 B/6,658 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-20-generic; however: Package linux-image-3.2.0-20-generic is not installed. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.20.22); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-headers-generic: linux-headers-generic depends on linux-headers-3.2.0-20-generic; however: Package linux-headers-3.2.0-20-generic is not installed. dpkg: error processing linux-headers-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-image-generic linux-generic linux-headers-generic E: Sub-process /usr/bin/dpkg returned an error code (1) As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.

    Read the article

  • Oracle B2B 11g - Transport Layer Acknowledgement

    - by Nitesh Jain Oracle
    In the health care industry, an acknowledgement or response should be sent back very quickly. Once a message is received, an acknowledgement should be sent back to the trading partner (TP). Oracle B2B provides a solution to send the acknowledgement or response from the transport layer of MLLP; this is called an immediate acknowledgment. An immediate acknowledgment is generated and transmitted in the transport layer. It is an alternative to the functional acknowledgment, which is generated after processing/validating the data in the document layer. Oracle B2B provides four types of immediate acknowledgment: Default: Oracle B2B parses the incoming HL7 message and generates an acknowledgment from it. This mode uses the details from the incoming payload and generates the acknowledgement based on the incoming HL7 message control number, sender and application identification. By default, an immediate ACK is a generic ACK. A trigger event can also be sent back by using the Map Trigger Event property. If mapping the MSH.10 of the ACK to the MSH.10 of the incoming business message is required, then enable the Map ACK Control ID property. Simple: B2B sends a predefined acknowledgment message to the sender without parsing the incoming message. Custom: Custom immediate ACK/response mode lets a user define their own response/acknowledgement. This is configured using a file specified in the Custom Immediate ACK File property. Negative: In this case, an immediate ACK is returned only in the case of exceptions.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

SET STATISTICS IO ON ;

WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ),
     TopKeys   AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id = ANY (SELECT id FROM TopKeys)
OPTION (MAXDOP 1) ;

WITH TestTable AS ( SELECT * FROM dbo.TestMAX ),
     TopKeys   AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1) ;

WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ),
     TopKeys   AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1) ;

WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ),
     TopKeys   AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1) ;

SET STATISTICS IO OFF ;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511  lob logical reads 000
Table 'TestMAX'    logical reads 25511  lob logical reads 000
Table 'TestTEXT'   logical reads 00412  lob logical reads 597
Table 'TestMAXOOR' logical reads 00413  lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835  lob logical reads 0
Table 'TestMAX'    logical reads 835  lob logical reads 0
Table 'TestTEXT'   logical reads 686  lob logical reads 597
Table 'TestMAXOOR' logical reads 686  lob logical reads 448

With the new index, all four queries use the same query plan.

Performance Summary:
0.3 seconds elapsed time
6MB memory grant
0MB tempdb usage
1MB sort set
835 logical reads (CHAR, MAX)
686 logical reads (TEXT, MAXOOR)
597 LOB logical reads (TEXT)
448 LOB logical reads (MAXOOR)
No sort warning

I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else:

Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: @[email protected]
twitter: @SQL_Kiwi

    Read the article

  • How does an optimizing compiler react to a program with nested loops?

    - by D.Singh
    Say you have a bunch of nested loops.

    public void testMethod() {
        for (int i = 0; i < 1203; i++) {
            // some computation
            for (int k = 2; k < 123; k++) {
                // some computation
                for (int j = 2; j < 12312; j++) {
                    // some computation
                    for (int l = 2; l < 123123; l++) {
                        // some computation
                        for (int p = 2; p < 12312; p++) {
                            // some computation
                        }
                    }
                }
            }
        }
    }

    When the above code reaches the stage where the compiler will try to optimize it (I believe it's when the intermediate language needs to be converted to machine code?), what will the compiler try to do? Is there any significant optimization that will take place? I understand that the optimizer will break up the loops by means of loop fission, but this is only per loop, isn't it? What I mean with my question is: will it take any action exclusively based on seeing the nested loops? Or will it just optimize the loops one by one? If the Java VM complicates the explanation then please just assume that it's C or C++ code.

    Read the article

  • JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. In the first post, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g we looked at how to create a JMS queue and its dependent objects in WebLogic Server. In the previous post, JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue I showed how to write a message to that JMS queue using the QueueSend.java sample program. In this article, we will use a similar sample, the QueueReceive.java program to read the message from that queue. Please review the previous posts if you have not already done so, as they contain prerequisites for executing the sample in this article.

    1. Source code

    The following java code will be used to read the message(s) from the JMS queue. As with the previous example, it is based on a sample program shipped with the WebLogic Server installation. The sample is not installed by default, but needs to be installed manually using the WebLogic Server Custom Installation option, together with many other useful samples. You can either copy-paste the following code into your editor, or install all the samples. The knowledge base article in My Oracle Support: How To Install WebLogic Server and JMS Samples in WLS 10.3.x (Doc ID 1499719.1) describes how to install the samples.
QueueReceive.java

package examples.jms.queue;

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/**
 * This example shows how to establish a connection to
 * and receive messages from a JMS queue. The classes in this
 * package operate on the same JMS queue. Run the classes together to
 * witness messages being sent and received, and to browse the queue
 * for messages. This class is used to receive and remove messages
 * from the queue.
 *
 * @author Copyright (c) 1999-2005 by BEA Systems, Inc. All Rights Reserved.
 */
public class QueueReceive implements MessageListener {
    // Defines the JNDI context factory.
    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";

    // Defines the JMS connection factory for the queue.
    public final static String JMS_FACTORY = "jms/TestConnectionFactory";

    // Defines the queue.
    public final static String QUEUE = "jms/TestJMSQueue";

    private QueueConnectionFactory qconFactory;
    private QueueConnection qcon;
    private QueueSession qsession;
    private QueueReceiver qreceiver;
    private Queue queue;
    private boolean quit = false;

    /**
     * Message listener interface.
     * @param msg message
     */
    public void onMessage(Message msg) {
        try {
            String msgText;
            if (msg instanceof TextMessage) {
                msgText = ((TextMessage) msg).getText();
            } else {
                msgText = msg.toString();
            }
            System.out.println("Message Received: " + msgText);
            if (msgText.equalsIgnoreCase("quit")) {
                synchronized (this) {
                    quit = true;
                    this.notifyAll(); // Notify main thread to quit
                }
            }
        } catch (JMSException jmse) {
            System.err.println("An exception occurred: " + jmse.getMessage());
        }
    }

    /**
     * Creates all the necessary objects for receiving
     * messages from a JMS queue.
     *
     * @param ctx JNDI initial context
     * @param queueName name of queue
     * @exception NamingException if operation cannot be performed
     * @exception JMSException if JMS fails to initialize due to internal error
     */
    public void init(Context ctx, String queueName) throws NamingException, JMSException {
        qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
        qcon = qconFactory.createQueueConnection();
        qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = (Queue) ctx.lookup(queueName);
        qreceiver = qsession.createReceiver(queue);
        qreceiver.setMessageListener(this);
        qcon.start();
    }

    /**
     * Closes JMS objects.
     * @exception JMSException if JMS fails to close objects due to internal error
     */
    public void close() throws JMSException {
        qreceiver.close();
        qsession.close();
        qcon.close();
    }

    /**
     * main() method.
     *
     * @param args WebLogic Server URL
     * @exception Exception if execution fails
     */
    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.out.println("Usage: java examples.jms.queue.QueueReceive WebLogicURL");
            return;
        }
        InitialContext ic = getInitialContext(args[0]);
        QueueReceive qr = new QueueReceive();
        qr.init(ic, QUEUE);
        System.out.println(
            "JMS Ready To Receive Messages (To quit, send a \"quit\" message).");

        // Wait until a "quit" message has been received.
        synchronized (qr) {
            while (!qr.quit) {
                try {
                    qr.wait();
                } catch (InterruptedException ie) {}
            }
        }
        qr.close();
    }

    private static InitialContext getInitialContext(String url) throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, url);
        return new InitialContext(env);
    }
}
2. How to Use This Class

2.1 From the file system on Linux

This section describes how to use the class from the file system of a WebLogic Server installation.

Log in to a machine with a WebLogic Server installation and create a directory to contain the source and code matching the package name, e.g. $HOME/examples/jms/queue. Copy the above QueueReceive.java file to this directory.

Set the CLASSPATH and environment to match the WebLogic server environment. Go to $MIDDLEWARE_HOME/user_projects/domains/base_domain/bin and execute . ./setDomainEnv.sh

Collect the following information required to run the script:

The JNDI name of the JMS queue to use: In the WebLogic server console > Services > Messaging > JMS Modules > Module name (e.g. TestJMSModule) > JMS queue name (e.g. TestJMSQueue), select the queue and note its JNDI name, e.g. jms/TestJMSQueue

The JNDI name of the connection factory to use to connect to the queue: Follow the same path as above to get the connection factory for the above queue, e.g. TestConnectionFactory, and its JNDI name, e.g. jms/TestConnectionFactory

The URL and port of the WebLogic server running the above queue: Check the JMS server for the above queue and the managed server it is targeted to, for example soa_server1. Now find the port this managed server is listening on, by looking at its entry under Environment > Servers in the WLS console, e.g. 8001. The URL for the server to be passed to the QueueReceive program will therefore be t3://host.domain:8001, e.g. t3://jbevans-lx.de.oracle.com:8001

Edit QueueReceive.java and enter the above connection factory and queue name respectively under
... public final static String JMS_FACTORY="jms/TestConnectionFactory";
... public final static String QUEUE="jms/TestJMSQueue"; ...

Compile QueueReceive.java using javac QueueReceive.java

Go to the source’s top-level directory and execute it using java examples.jms.queue.QueueReceive t3://jbevans-lx.de.oracle.com:8001

This will print a message that it is ready to receive messages (to quit, send a “quit” message). The program will read all messages in the queue and print them to the standard output until it receives a message with the payload “quit”.

2.2 From JDeveloper

The steps from JDeveloper are the same as those used for the previous program QueueSend.java, which is used to send a message to the queue, so we won't repeat them here. Please see the previous blog post at JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue and apply the same steps in that example to the QueueReceive.java program.

This concludes the example. In the following post we will create a BPEL process which writes a message based on an XML schema to the queue.

    Read the article

  • SRs @ Oracle: How do I License Thee?

    - by [email protected]
    With the release of the new Sun Ray product last week comes the advent of a different software licensing model. Where Sun had initially taken the approach of '1 desktop device = one license', we later changed things to be '1 concurrent connection to the server software = one license', and while there were ways to tell how many connections there were at a time, it wasn't the easiest thing to do.  And, when should you measure concurrency?  At your busiest time, of course... but when might that be?  9:00 Monday morning this week might yield a different result than 9:00 Monday morning last week.

    In the acquisition of this desktop virtualization product suite Oracle has changed things to be, in typical Oracle fashion, simpler.  There are now two choices for customers around licensing: Named User licenses and Per Device licenses. Here's how they work, and some examples:

    The Rules
    1) A Sun Ray device and a PC running the Desktop Access Client (DAC) are both considered unique devices.
    OR,
    2) Any user running a session on either a Sun Ray or a DAC is still just one user.

    So, you have a choice of path to go down.

    Some Examples:
    Here are 6 use cases I can think of right now that will help you choose the Oracle server software licensing model that is right for your business:

    Case 1
    If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home, that is 100 user licenses.
    If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home, that is 120 device licenses.
    Two cases using the same metrics - different licensing models and therefore different results.

    Case 2
    If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home, that is 200 user licenses.
    If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home, that is 120 device licenses.
    Same metrics - very different results.

    Case 3
    If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home, that is 50 user licenses.
    If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home, that is 120 device licenses.
    Same metrics - but again - very different results.

    Based on the way your business operates you should be able to see which of the two licensing models is most advantageous to you.

    Got questions?  I'll try to help.

    (Thanks to Brad Lackey for the clarifications!)
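
    To make the two counting rules concrete, here is a small, purely illustrative sketch of the arithmetic. The table and names are made up for the example (this is not an Oracle licensing tool), and it is written as a T-SQL query simply to keep the counting explicit: named-user licensing counts distinct users, while per-device licensing counts distinct devices.

    -- Hypothetical usage data: one row per user/device combination that connects
    -- to the Sun Ray server software. Three users; two of them also use DAC at home.
    DECLARE @Usage TABLE (user_name varchar(30), device varchar(30)) ;

    INSERT @Usage (user_name, device) VALUES
        ('anna',  'sunray-01'),
        ('ben',   'sunray-02'),
        ('chris', 'sunray-03'),
        ('anna',  'dac-home-anna'),   -- Anna also connects from her home PC
        ('ben',   'dac-home-ben') ;   -- so does Ben

    SELECT named_user_licenses = COUNT(DISTINCT user_name),  -- 3 users
           per_device_licenses = COUNT(DISTINCT device)      -- 5 devices (3 Sun Rays + 2 DAC PCs)
    FROM @Usage ;

    Scaled up to Case 1 above (100 users on 100 Sun Rays, 20 of whom also use DAC at home), the same two counts give 100 named-user licenses versus 120 per-device licenses.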

    Read the article

  • SnagIt Live Writer Plug-in updated

    - by Rick Strahl
    I've updated my free SnagIt Live Writer plug-in again as there have been a few issues with the new release of SnagIt 11. It appears that TechSmith has trimmed the COM object and removed a bunch of redundant functionality which has broken the older plug-in. I also updated the layout and added SnagIt's latest icons to the form. Finally I've moved the source code to Github for easier browsing and downloading for anybody interested and easier updating for me.

    This plug-in is not new - I created it a number of years back, but I use the hell out of it both for my blogging and for a few internal apps that use MetaWebLogApi to update online content. The plug-in makes it super easy to add captured image content directly into a post and upload it to the server.

    What does it do? Once installed the plug-in shows up in the list of plug-ins. When you click it, it launches a SnagIt Capture Dialog. Typically you set the capture settings once, and then save your settings. After that a single click or ENTER press gets you off capturing. If you choose the Show in SnagIt preview window option, the image you capture is displayed in the preview editor where you can mark up images, which is one of SnagIt's great strengths IMHO. The image editor has a bunch of really nice effects for framing, marking up and highlighting images that is really sweet. Here's a capture from a previous image in the SnagIt editor where I applied the saw tooth cutout effect.

    Images are saved to disk and can optionally be deleted immediately, since Live Writer creates copies of original images in its own folders before uploading the files. No need to keep the originals around typically. The plug-in works with SnagIt Versions 7 and later. It's a simple thing of course - nothing magic here, but incredibly useful at least to me. If you're using Live Writer and you own a copy of SnagIt do yourself a favor and grab this and install it as a plug-in.

    Resources:
    SnagIt Windows Live Writer Plug-in Installer
    Source Code on GitHub
    Buy SnagIt

    © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article
