Search Results

Search found 9727 results on 390 pages for 'llblgen pro'.

Page 2/390 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Installing Ubuntu 14.04 on macbook pro EFI

    - by user279771
    MacBook Pro: Mavericks, 5,2, graphics: NVIDIA GeForce 9600M. I followed the guides from https://help.ubuntu.com/community/UEFIBooting#Detect_.28U.29EFI_firmware_processor_architecture and http://www.rodsbooks.com/ubuntu-efi/. What I have is the following layout on /dev/sda: the Apple partitions, /boot, / (root) and swap. When installing Ubuntu, I did not install the boot loader during installation but did so afterwards in a chroot environment after installing grub-efi. I installed GRUB to /dev/sda1 (the EFI partition), which created the grub64.efi file in efi/ubuntu. This allows the rEFInd boot manager to bring up GRUB and select Ubuntu; however, the graphics do not work, even after adding nomodeset and removing quiet/splash from the kernel parameters. Any ideas on what could be wrong? To be clear, if I remove quiet/splash, I can see all the text startup messages being printed out, but the display manager doesn't appear to start (the screen stays black). Oddly enough, the Ubuntu startup sound can be heard.

    Read the article

  • Boot from Ubuntu ISO on a hfsplus partition (macbook pro)

    - by user279771
    I would like to be able to boot from an ISO stored on an HFS+ partition (the main partition on my MacBook Pro). Here is what I've done so far (writing in shorthand :D):

        grub> insmod hfs,hfsplus,loopback,part_gpt
        grub> loopback loop (hd0,gpt2)/location/to/img.io
        grub> configfile (loop)/boot/grub/loopback.cfg
        ...

    This does not work; tab-completion of the (loop) path does not work either. However, this does work (tab-completion and all) if the ISO comes from my ext3 partition. For particular reasons, I can't have the ISO images on the ext3 partition; they need to be kept on the HFS+ partition. What should be done?

    Read the article

  • LLBLGen - DeleteMulti

    - by Neil
    I have a checkbox list of categories. During an update, I am trying to delete all the items in the list from the association table, then insert them all (so I don't have to determine whether an item already exists in the association table and only insert newly checked items). Here is the code I have:

        // First we need to delete the records from ArticleTopicCategory where articleId is the id of the article we are updating
        List<Guid> categoriesToDelete = new List<Guid>();
        foreach (Guid category in this.View.SelectedCategories)
        {
            categoriesToDelete.Add(category);
        }
        ArticleTopicCategoryCollection articleCategories = new ArticleTopicCategoryCollection();
        PredicateExpression filter = new PredicateExpression(ArticleTopicCategoryFields.Id == categoriesToDelete);
        articleCategories.DeleteMulti(filter);

    'categoriesToDelete' holds a valid list of Guids that need to be deleted, but they are not being deleted. Thanks in advance!
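
    A likely cause is the filter itself: comparing the Id field to a whole list with == may not produce the IN (...) clause you expect, and the list holds category ids rather than primary keys of the association rows. Below is a minimal sketch of the usual fix. It assumes LLBLGen Pro's SelfServicing API (as in the question's code) and invented field names (CategoryId, ArticleId) for the association table; the exact FieldCompareRangePredicate overload may differ per LLBLGen version, so adjust names and overloads to your generated code.

        // Sketch, not verified against a specific LLBLGen version: build an IN (...)
        // predicate over the checked category ids and scope it to the article row.
        // CategoryId/ArticleId are assumed field names; articleId is a placeholder
        // for the id of the article being updated.
        var filter = new PredicateExpression();
        filter.Add(new FieldCompareRangePredicate(
            ArticleTopicCategoryFields.CategoryId, categoriesToDelete)); // CategoryId IN (@p1, @p2, ...)
        filter.Add(ArticleTopicCategoryFields.ArticleId == articleId);   // AND ArticleId = @articleId

        var articleCategories = new ArticleTopicCategoryCollection();
        articleCategories.DeleteMulti(filter); // one DELETE ... WHERE statement, no fetch needed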

    Read the article

  • LLBLGen: LINQ to LLBLGen doesn't work

    - by StreamT
    I want to make a custom select from a database table using LINQ. We use LLBLGen as our ORM solution. I can't run a LINQ query against an entity collection class unless I call its GetMulti(null) method first. Is it possible to run a LINQ query against LLBLGen without extracting the whole table first?

        BatchCollection batches = new BatchCollection();
        BatchEntity batch = batches.AsQueryable()
            .Where(i => i.RegisterID == 3)
            .FirstOrDefault(); // Exception: Sequence contains no elements

        batches = new BatchCollection();
        batches.GetMulti(null); // I don't want to extract the whole table.
        BatchEntity batch = batches.AsQueryable()
            .Where(i => i.RegisterID == 3)
            .FirstOrDefault(); // Works fine
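
    For reference, LLBLGen Pro exposes its LINQ provider through the generated LinqMetaData class rather than through entity collections, so a query like the one below is translated to SQL instead of filtering an in-memory list. This is only a sketch: the Batch property name follows the generated naming convention and is an assumption, and the LinqMetaData constructor differs between SelfServicing (parameterless, as the question's code suggests) and Adapter (takes an adapter instance).

        // Sketch: query through LinqMetaData so the Where clause runs on the database.
        // With the Adapter template set this would be: new LinqMetaData(adapter).
        var metaData = new LinqMetaData();
        BatchEntity batch = metaData.Batch
            .Where(b => b.RegisterID == 3)
            .FirstOrDefault(); // translated to a single row-limited SELECT ... WHERE RegisterID = 3; null if no match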

    Read the article

  • Can't get wireless on macbook pro 8,2

    - by Jeff
    I'm a linux Newb, and I have tried several of the fixes listed to try and get my wifi drivers to work, but to no avail. Does anyone here know why this isn't working for me, or better yet, how to fix it? Under lspci -vvv I get the following output:

        03:00.0 Network controller: Broadcom Corporation BCM4331 802.11a/b/g/n (rev 02)
            Subsystem: Apple Inc. AirPort Extreme
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR-
            Kernel modules: bcma

    With sudo lshw -class network I get this output:

        *-network UNCLAIMED
            description: Network controller
            product: BCM4331 802.11a/b/g/n
            vendor: Broadcom Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            version: 02
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:b0600000-b0603fff

    Any help would be greatly appreciated!

    Read the article

  • Macbook Pro 8,2 Graphics switching - Ubuntu 12.04

    - by fgs
    I've been reading docs and various pages for a few hours now and can't seem to put all of the pieces together on this. Basically I am trying to get 12.04 installed on my MBP 8,2 with graphics card switching working in some way or another. My basic understanding is that I need to do an EFI boot install of Ubuntu so that graphics card switching will work (due to the hardware design). From there I may be able to use one of the kernel modules for graphics switching: https://help.ubuntu.com/community/HybridGraphics That article isn't clear on whether I need to do an EFI install. I have also seen comments in posts here that say an EFI install works by default as long as you have rEFIt installed. Overall, I'm quite lost as to the simplest way to proceed to get an install up and running with graphics switching. I don't mind using open source GFX drivers as long as the basics work. Any help towards a solution is greatly appreciated.

    Read the article

  • Digital Audio Output light on on MacBook Pro

    - by Emerson Hsieh
    I don't know if this problem already existed when I installed Ubuntu before. Recently I noticed that when I boot Ubuntu, the Digital Audio Output light automatically switches on. The Digital Audio Output light being on means "something is wrong in the headphone port", although my headphones work fine in Ubuntu. I've heard that the headphone jack contains some magical "switch" that will fix the light problem, so I poked the headphone port with chopsticks, pens, paper clips, even my finger, and the Digital Audio Output light still stays on. I don't have this problem in OS X. How do I switch the light off?

    Read the article

  • Wifi problem in Ubuntu using a MacBook Pro when it restarts

    - by Amro
    I read this thread: http://ubuntuforums.org/showthread.php?t=2011756 and followed it step by step on page 1. I was then connected, but after I restarted my MacBook I lost the wifi connection again. I don't know why, or what the problem is exactly. Every time I run this command:

        dmesg | grep -e b43 -e bcma

    I get this output:

        [ 2012.769684] bcma-pci-bridge 0000:02:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [ 2012.769701] bcma-pci-bridge 0000:02:00.0: setting latency timer to 64
        [ 2012.769775] bcma: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x25, class 0x0)
        [ 2012.769808] bcma: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x1D, class 0x0)
        [ 2012.769889] bcma: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x13, class 0x0)
        [ 2012.770175] bcma: PMU resource config unknown for device 0x4331
        [ 2012.824527] bcma: Bus registered
        [ 2012.831744] b43-phy0: Broadcom 4331 WLAN found (core revision 29)
        [ 2013.371031] b43-phy0: Loading firmware version 666.2 (2011-02-23 01:15:07)

    To get the connection back, I have to re-enter the commands from the "reload driver" step every time. How can I get Ubuntu to see my wifi/wireless device automatically when I reboot my computer?

    Read the article

  • Virtual Win XP Mode stopped HP LJ Pro M1212nf MFP printing in Win 7 Pro

    - by Dee
    Virtual Win XP Mode stopped HP LJ Pro M1212nf MFP printing in Win 7 Pro: I am running Windows 7 Pro with Virtual Windows XP Mode. My printer is HP LaserJet Pro M1212nf MFP attached directly to a USB port of the computer. This printer was working fine in Windows 7, until I tried to attach the printer to the Virtual Windows XP Mode in order to load the printer driver in the Virtual Windows XP Mode. At that point, the printer disappeared from the list of USB devices on the toolbar at the top of the window of the Virtual Windows XP Mode. After installing the printer driver in the Virtual Windows XP Mode, the printer did not work in that mode and also no longer worked in Windows 7. In Windows 7 and in the Virtual Windows XP Mode, print files are sent to the print queue, but are never printed. In Windows 7, the print queue states that the printer is offline. In the Virtual Windows XP Mode, the printer can be toggled from "Print Offline" to "Print Online", but no print files are ever printed from the print queue. The printer acts as though it is no longer connected to the computer, even though it is still physically connected to the USB port of the computer. How can I get the printer to work again in Windows 7? (At this point, I am no longer interested in using the Virtual Windows XP Mode.) I have tried a large number of things to find and fix the printer problem, but have had no success. Device Manager cannot see the printer even though it is physically connected via USB port (have tried different USB ports) to the computer. Restoring Win 7 and Virtual Win XP Mode to times before the problem does not fix the problem. How can I get the computer to see the printer, so that I can print again in Win 7?

    Read the article

  • Does the retina MacBook Pro Australian charger duckhead have 3 prongs on it or 2?

    - by frenchglen
    I notice that with my Retina MacBook Pro (bought in the UK), its charger uses all three prongs to power the laptop (like this, rather than this). Not surprised, it's a powerful machine. I'm going to Australia now and I want to use an Aussie duckhead and make use of the innovative design. I have an existing one from an old iPod (and it used to work fine with my MacBook Air when in Australia), but it has only two prongs (like this). Is that enough to power my rMBP? Or will it damage/strain it, so that I need to find a 3-prong one (if it even exists - I haven't bought my MBP in Australia) like this? Thanks

    Read the article

  • macbook pro for developer

    - by Michael Ellick Ang
    Which of the following choices would be more beneficial to developers?

    - 13 inch Macbook Pro, Core 2 Duo, 4 GB Memory, 128 GB SSD - $1550 - Faster Storage
    - 13 inch Macbook Pro, Core 2 Duo, 8 GB Memory, 250 GB HD - $1600 - More Memory
    - 15 inch Macbook Pro, Core i5, 4 GB Memory, 320 GB HD - $1800 - Better CPU

    Thanks.

    Read the article

  • exporting clip in Final Cut Pro X or related video editing software on Mac

    - by user46976
    I'm using Final Cut Pro X to edit a 1-hour-long video. I made individual clips from it in Final Cut Pro X and I want to save just these clips, some of which are only 5 minutes long. How can I do this? I tried using the app ClipExporter, but it won't even read my .fcpxml file; it just says that it's not a valid file and gives no helpful information at all. Another method I tried was to assign roles to each clip. I made one clip, 5 minutes long, and then used Share - Export in Final Cut Pro X and chose the option to export roles as separate files. However, the export still estimates that it will take over an hour, so it looks like it's trying to export the whole movie rather than the simple 5-minute clip, which should be exportable as a .MOV or related format in a few minutes. How can I do this in Final Cut Pro X? I'm also happy to switch to related video editing software as long as it is not extremely expensive. This seems like a very trivial and obvious feature: take a segment from a long movie and export just the selected region of it... I don't understand why it's so complicated to do in Final Cut Pro X. Thanks.

    Read the article

  • External Dell Display doesn't work with MacBook Pro (2011) after Thunderbolt Firmware Update (1.0 and 1.2)

    - by tom
    Today two Thunderbolt Firmware Updates (1.0 and 1.2) became available for my MacBook Pro (Early 2011). After installing both, my external monitor, a Dell U2713HM, no longer works. The system detects the display, but the display shows only black. An Apple Thunderbolt Display works fine, and a MacBook Air can use the Dell monitor without problems. My MacBook Pro can use the Dell monitor just fine when I boot from a USB stick. Therefore, the Thunderbolt firmware update clearly seems to be the problem. Does anyone have the same problem? Any solutions or workarounds? I guess there is no way to remove a Thunderbolt firmware update once it's installed, right?

    Update 24.10.2013: Is there no one else with this problem? In the meantime I have tried three different cables – none worked. My colleague with the same generation MacBook Pro also can't use my display after installing the firmware update. All colleagues with MacBook Airs and newer MacBook Pros (none of which received the firmware update) can use the display.

    Update 29.10.2013: Wow, OK, today my new MacBook Pro Retina 13" (Late 2013) arrived. Guess what, I cannot use the display with it either. Only HDMI works – and not at the full resolution.

    Read the article

  • LLBLGen Pro v3.0 has been released!

    After two years of hard work we released v3.0 of LLBLGen Pro today! V3.0 comes with a completely new designer which has been developed from the ground up for .NET 3.5 and higher. Below I'll briefly mention some highlights of this new release:

    - Entity Framework (v1 & v4) support
    - NHibernate support (hbm.xml mappings & FluentNHibernate mappings)
    - Linq to SQL support
    - Allows both Model first and Database first development, or a mixture of both
    - .NET 4.0 support
    - Model views
    - Grouping...

    Read the article

  • Remote connect into macbook pro at a different resolution

    - by user60277
    Hello, I have a Dell laptop with Windows 7 on it. Its resolution is 1920x1080. I want to connect to a MacBook Pro at that resolution. The MacBook Pro has a resolution of 1440x900, so when I VNC into it, I can only see a 1440x900 box with black borders at full resolution. The MacBook Pro can drive resolutions of 2560x1440. What program do I use to connect to the MacBook at full (1920x1080) resolution? I can use Remote Desktop to connect from the Dell laptop to another Dell laptop that has a 1440x900 max resolution; however, in the case of a Remote Desktop connection I can expand the window to be 1920x1080. I'm using the TightVNC viewer on Windows. Thanks

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application, as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead to the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you a good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

    Another tool to use in this case, which is lower level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

    We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place.

    In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changed.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.

    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. (A corrected version of this query is sketched after this article.)

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them.
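
    To make the .ToList() pitfall above concrete, here is a minimal before/after sketch. It reuses the article's own metaData.Customers example; how LinqMetaData is constructed (SelfServicing vs. Adapter) and the surrounding context are assumptions, not a complete program.

        // Sketch only: assumes a generated LinqMetaData exposing a Customers property,
        // as in the article's example. Construction depends on your generated code
        // (SelfServicing: new LinqMetaData(); Adapter: new LinqMetaData(adapter)).
        var metaData = new LinqMetaData();

        // Problematic: .ToList() materializes every customer first, so the Where filter
        // runs in memory over an IEnumerable<T>.
        var inMemory = (from c in metaData.Customers.ToList()
                        where c.Country == "Norway"
                        select c).ToList();

        // Better: keep the query an IQueryable<T> so the provider translates the filter
        // into SQL (WHERE Country = 'Norway') and only matching rows cross the network.
        var translated = (from c in metaData.Customers
                          where c.Country == "Norway"
                          select c).ToList();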

    Read the article

  • Use MacBook Pro airport for injection with kismac

    - by Am1rr3zA
    Hi, I want to use KisMAC to crack my WEP wireless password with my MacBook Pro, but when I get to the step that injects packets, it doesn't do anything! I read somewhere that MacBook Pro wireless cards don't support injection. Is that right? How can I solve this? If I must buy a USB wireless adapter, which one is better?

    Read the article

  • Speakers don't work in MacBook Pro

    - by Ali_IT
    I have a MacBook Pro, but my built-in stereo speakers don't work and a red light comes from the headphone out/optical digital audio out port. My built-in stereo speakers aren't dead, because right when the OS starts a sound comes from them, but as soon as the MacBook Pro is ready and I play music they don't work, and under Sound in System Preferences the name of the device for sound output is "Digital Out". Is the problem hardware or software? Is there any solution?

    Read the article

  • Import/rip/convert DVD to Adobe Premiere Pro for Mac

    - by alexyu2010
    For those who want to edit their videos, Adobe Premiere Pro is inevitably a good choice. It is a professional, real-time, timeline-based video editing software application that supports many video editing cards and plug-ins for accelerated processing, additional file format support and video/audio effects. Although Adobe Premiere Pro is said to be for professionals, it is not so complicated that a hobbyist can't excel at using it in an hour or so.

    General file formats supported by Adobe Premiere Pro: up to now, Adobe Creative Suite has released several versions of Adobe Premiere Pro, including Adobe Premiere 1.0, Adobe Premiere 2.0, Adobe Premiere Pro CS3, Adobe Premiere Pro CS4 and the newly published Adobe Premiere Pro CS5. Although I saw diversity in the file formats they support, I did find some common file formats supported by all of them, such as AVI, MOV and MPG.

    Importing DVD, Adobe Premiere Pro says "NO": it is obvious to all of us that Adobe Premiere Pro will never give DVD a hug, and it isn't rare to see many people really confused when they want to import their DVDs into Adobe Premiere Pro for editing. What to do? Yes, you may have noticed that there is only one way out: ripping your DVDs to formats that work with Adobe Premiere Pro natively, and this is what a DVD to Adobe Premiere Pro converter can do.

    Importing DVD to Adobe Premiere Pro on Mac: DVD to Adobe Premiere Pro converter for Mac is a specially designed application for ripping/converting DVD movies, DVD VOB files or DVD clips to Adobe Premiere Pro compatible AVI, MOV or MPG files, with both a DVD ripping tool and a video converting tool inside one versatile program for handling DVDs and videos. The Mac DVD to Adobe Premiere Pro converter can work with a wide variety of files including DVD, VOB, AVI, WMV, MPG, MOV, MP4, DV, FLV, MKV, ASF, SWF and HD video, for use with other editing tools like iMovie, FCP etc., playback in QuickTime or iTunes, use on portable devices like iPod, iPhone, iPad, iRiver, BlackBerry, Gphone or mobile phones, or uploading to websites such as YouTube and MySpace. The DVD to Adobe Premiere Pro converter for Mac can also help you do some basic editing. You can trim and crop your DVD movie or DVD clip, apply special effects to make it more artistic, merge several DVD clips into a single one, or tweak the output parameters for video and audio separately to get a better quality rendering. Besides, to keep good control over the process, a preview window is also available.

    Read the article

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine, I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors. I have tried two approaches for cloning the drive: using Disk Utility, by starting from the OSX install DVD, and using Carbon Copy Cloner. The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?

    Read the article

  • Restore Bak File created on Windows XP pro in Windows XP Pro 64

    - by Kobojunkie
    I have a situation that I need help with. I backed up my files on Windows XP using the system backup utility/wizard, and then installed a new operating system on the machine. Now I want to restore my old files from the .bak file, but it is not being recognized at all. Did I do this wrong, or is there a way to still get back my old files on my new OS? Thanks in advance!

    Read the article

  • MacBook Pro with Time Capsule can not see Samsung CLX 3175FW wireless printer on Bonjour

    - by syncopat
    I have a MacBook Pro running OS X 10.6 and another MacBook Pro running OS X 10.5. Neither sees my Samsung printer when I click on Bonjour. Needless to say, neither will print. I have a Time Capsule connected wirelessly to my MacBook Pros. I have tried reinstalling drivers for the printer, but nothing seems to work. I tried this approach because, after Apple replaced my Time Capsule, print requests would get hung up when I printed the way I had initially set things up. Any suggestions would be helpful.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >