Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.

Page 623 of 1038

  • Ubuntu installed alongside Win 8 but not shown in boot menu

    - by Mahesha999
    Actually the question says it all, but let me tell you what I did, so you may find exactly what might have gone wrong. I have Win 8 installed on a 500 GB HDD, so I shrank it four times to create:

    partition 1 - the original partition containing the Win 8 system (118 GB)
    partition 2 - NTFS, formatted for my data (188 GB)
    partition 3 - NTFS, formatted for my data (100 GB)
    partition 4 - NTFS, intended for Linux distro 1; I reformatted it to ext4 during the Ubuntu installation (25 GB)
    partition 5 - NTFS, intended for Linux distro 2 (21 GB)

    So now I booted Ubuntu from USB (created from ubuntu-12.04-desktop-amd64.iso) and deleted the last two partitions, 4 and 5, to create:

    partition 1 - ext4, where I installed Ubuntu (25 GB)
    partition 2 - swap (4 GB)
    partition 3 - unallocated space, not formatted yet (17 GB)

    The Ubuntu installer said it had installed successfully and that I had to restart to boot into Ubuntu. But when I restarted, Windows 8 auto-booted - there was no dual boot. After that I divided the 100 GB partition above into an 80 GB and a 20 GB one (since I read online that I should have /home in a separate partition for convenience, I created the 20 GB partition for it). So I went on to manually create a boot entry using EasyBCD, as shown in the picture at http://s19.postimage.org/dof2zuvw3/Free_BCD.png

    When I created the entry, EasyBCD showed the information as follows:

        Windows Boot Manager
        --------------------
        identifier              {9dea862c-5cdd-4e70-acc1-f32b344d4795}
        device                  partition=\Device\HarddiskVolume2
        description             Windows Boot Manager
        locale                  en-US
        inherit                 {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e}
        integrityservices       Enable
        default                 {ea8167ad-d189-11e1-90e4-ab2f09569dcc}
        resumeobject            {ea8167a3-d189-11e1-90e4-ab2f09569dcc}
        displayorder            {ea8167ad-d189-11e1-90e4-ab2f09569dcc}
                                {ea8167b1-d189-11e1-90e4-ab2f09569dcc}
        toolsdisplayorder       {b2721d73-1db4-4c62-bf78-c548a880142d}
        timeout                 10
        displaybootmenu         Yes

        Windows Boot Loader
        -------------------
        identifier              {ea8167ad-d189-11e1-90e4-ab2f09569dcc}
        device                  partition=C:
        path                    \Windows\system32\winload.exe
        description             Windows 8
        locale                  en-US
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {9bc7fdf7-3ae0-11e2-be77-806e6f6e6963}

        Real-mode Boot Sector
        ---------------------
        identifier              {ea8167b1-d189-11e1-90e4-ab2f09569dcc}
        device                  partition=C:
        path                    \NST\AutoNeoGrub0.mbr
        description             Ubuntu

    Notice the last entry, the one just created for Ubuntu. However, after that, when I rebooted, it first showed the old DOS-style boot loader (not the Windows 8 UI-based one) with two entries, Windows and Ubuntu. Windows 8 was booting correctly, but I was getting an error while booting Ubuntu that took me to GRUB Rescue. Please help; I'm new to the Linux world.

    Read the article

  • IRC News from #netbeans on FreeNode

    - by Geertjan
    I joined the #netbeans channel on FreeNode last week and the discussions there are really great. It's so cool not to have the endless back and forth of an e-mail exchange. Instead, you can hammer out a complete solution to a problem while chatting live in the channel. A case in point was yesterday, when someone named 'charmeleon' wanted to create a NetBeans Platform based application that includes the "image" module from the NetBeans IDE sources. That way, he'd have a starting point for his own image-oriented application, since he'd not only have the NetBeans Platform, but also the sources of the "image" module. Had we been communicating via e-mail, it would have taken weeks, at least, to come to a solution. Instead, we hashed it out together live, including some very specific problems that would have been hard to communicate about via e-mail. In the end, I made a movie showing exactly the scenario that charmeleon was interested in. And, right now, in the #netbeans channel, charmeleon said: "NetBeans RCP feels like cheating once you start getting over the hump." I'm sure the fact that the hump was handled within a few hours of chatting on IRC is a big contributor to that impression.

    Read the article

  • How to customize the initrd embedded in or coming with the kernel image

    - by STATUS_ACCESS_DENIED
    I would like to add some tools, not just kernel modules, to the initrd (initramfs-based). I'm aware of how to unpack and repack the initrd with cpio, and I have even written a hook for /etc/initramfs-tools/hooks in the past to integrate a third-party kernel module. However, while the available script libraries seem to be geared towards the integration of modules, none of them seems to be for the integration of other entities (in particular programs and their dependencies). What options do I have to automate the integration of some useful recovery tools into the initrd? I'm talking about the "rescue" system that the system drops into if it is unable to mount the root drive given to it by the boot loader. Please note that I don't want the SquashFS approach as used on live CDs, because for the issue at hand it will be by far sufficient to include some relatively small tools that aid in recovery of the system (when it gets stuck in the initrd and can't boot further). Also, when the machines run into the issue we have had in the past, they tend to boot into the rescue system, but there a few tools are missing to get the system back on track ...
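
    A sketch of how this is often handled for programs (the tool paths below are just examples): initramfs-tools hooks can call copy_exec, which copies a binary into the initramfs together with the shared libraries it links against. A hook dropped into /etc/initramfs-tools/hooks might look like:

        #!/bin/sh
        # /etc/initramfs-tools/hooks/rescue-tools -- include extra recovery binaries
        PREREQ=""
        prereqs() { echo "$PREREQ"; }
        case "$1" in
            prereqs) prereqs; exit 0 ;;
        esac

        . /usr/share/initramfs-tools/hook-functions

        # copy_exec pulls in the binary plus the libraries ldd reports for it
        copy_exec /sbin/fdisk /sbin
        copy_exec /usr/bin/strace /bin

    After making the hook executable, rebuilding with update-initramfs -u should pack the tools (and their library dependencies) into the image.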

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place - except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries - by virtue of using the same resources as VLC Player it is known to be very capable; it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more of a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    - Super simple - all that is needed is a way to load, jump and render a frame programmatically; ideally it would use the system's codecs and not require an assortment of plugins.
    - Permissive license (LGPL or freer).
    - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight (.NET) library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • How to identify RAID (5 or 6) controllers that allow dynamic resize of the array

    - by David Pfeffer
    I'm building a server with a RAID 5 array, based on a hardware controller. I want to be able to later add additional disks and have the array rebalance across all of the disks, enlarging the usable size. I also want to be able to later upgrade to bigger disks (one at a time, of course) and then expand the array to fill the entire drive. These features are available in Linux software RAID (md). I've also heard they're available in some hardware controllers. Currently, I own the Adaptec RAID 3805 card and the 3ware 9650SE card. I'd prefer to use the Adaptec if possible, but I can't find out whether either of these cards offers this feature. If they don't, are there other affordable (read: sub-$600) RAID cards available that can accomplish this?

    Read the article

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that:

    - every file has a file extension, like .txt, .jpg, etc.
    - that extension is always short (usually 3 letters)

    I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder - how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?

    Read the article

  • Troubleshooting PO Output in Oracle Purchasing

    - by user793553
    Do your approved Purchase Orders occasionally not make it to the supplier, or does the attached document arrive incorrect? Or perhaps your users are reporting that the PO Output for Communication fails to fax, print or email. Try the new Search Helper in Doc ID 1377764.1. First choose the radio button that pertains to your issue. Below we see a case where there is a problem with the Purchase Order being sent to the supplier. Then choose the symptom (or symptoms) that occur in the second section. In this case we have chosen that the PDF attachment is malformed. Drill down to review the notes returned based on your symptoms, for possible issue resolution. Try this before logging a Service Request. For a list of all the Procurement Search Helpers, visit the Product Information Center via Doc ID 1391332.2.

    Read the article

  • find kernel config option in menuconfig

    - by puchu
    I've upgraded my gentoo-sources today to 3.3.8, and now I am looking at a diff between the old kernel's defconfig and the old kernel's .config: there are about 20 changes. I want to apply these changes manually in the new kernel's menuconfig. Where can I find a tool like:

        menuconfig-find -v 3.3.8-gentoo CONFIG_KVM_AMD
        >> Virtualization
        >>   -> Kernel-based Virtual Machine (KVM) support
        >>   -> KVM for AMD processors support
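
    For what it's worth, two stock approaches come close to this (a sketch, not a dedicated tool): inside make menuconfig itself, pressing / and entering the symbol name without the CONFIG_ prefix prints its location in the menu tree; outside menuconfig, the Kconfig files can be grepped directly:

        # run from the kernel source tree, e.g. /usr/src/linux-3.3.8-gentoo;
        # shows the file (and surrounding menu context) where the symbol is defined
        grep -rn --include=Kconfig "config KVM_AMD" .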

    Read the article

  • recommendation for configuration for a multi-core guestOS

    - by reidLinden
    Hi there, I've just received an upgraded host machine, and am looking to push some of those advances to my workstation's guest OS(s). In particular, I used to have a single processor with 2 cores, so my guest OS only had 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious about what would be recommended for my guest OS now: 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today. Suggestions?

    Read the article

  • Do you know of any ways to effectively manage brightness on a triple-screen setup?

    - by nizm0
    I use a desktop with multiple monitors. I have tried f.lux, and it is a great app for changing the color temperature from 3500K to 6500K (I recommend you try it), but it doesn't adjust screen brightness. I've googled for hours trying to find an app that will let me easily manage screen brightness, either by a "dim when idle" feature or by dynamically updating screen brightness based on the time of day. The Windows 7 desktop doesn't seem to have options for me to adjust brightness on this system, and manually changing three separate monitors is too much of a chore. Anybody have some recommendations? Perhaps a way to script this, or knowledge of how to use the supposed dynamic screen brightness functions that Windows was supposed to add in Windows 7?

    Read the article

  • filtering dates in a data view web part when using a web services data source

    - by Patrick Olurotimi Ige
    I was working on a data view web part recently and I had to filter the data based on dates. Since the data source was web services, I couldn't use the Offset which I blogged about earlier. When using web services to pull data in SharePoint Designer you have to use XPath. So, for example, this is the expression that populates the rows from the SOAP response:

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/>

    To filter the date nodes you need to add a predicate [...]. So you can do something like this (the predicate is the part in square brackets):

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/>

    For the filtering to work you need to have the date formatted as yyyyMMdd, as above. One more thing you must have noticed is the $fd variable. I created this by adding a calculated column to the list, something like =[Created]-2. So basically what the XPath does is: give me the data only when the Created date is greater than or equal to the Created date minus 2, i.e. two days less than the created date. Also note that when using web services in SharePoint Designer, if you try to use the default filtering you won't get to see "greater than" or "less than" in the comparison option list. :( Hope this helps.

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts:

    1. User is not able to access the WebLogic console.
    2. After a few days/hours WebLogic Server stops abruptly.
    3. When users close PuTTY, they are not able to connect to WebLogic Server anymore.
    4. When users close the Windows command prompt, they are not able to connect to WebLogic Server anymore.
    5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh.

    By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix based platforms, you need to use:

        nohup ./startManagedWebLogic.sh <Server> <URL> &

    On Windows platforms, you need to start Managed Servers using Windows Services: How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1) http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm

    There are a few more reasons that could cause similar symptoms, like a JVM crash, signals sent by the Operating System, and many others. But the above steps are the first ones to try. Enjoy!
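
    For example, a typical Linux invocation (the server name, admin URL and log file below are placeholders) that keeps the managed server alive after the terminal closes:

        # start the managed server detached from the terminal, logging to a file
        nohup ./startManagedWebLogic.sh ManagedServer1 t3://adminhost:7001 > ms1.out 2>&1 &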

    Read the article

  • trigger script on postfix delivery errors

    - by edovino
    I'm trying to get Postfix to run a script on soft (4xx) and hard (5xx) delivery errors, but I'm not sure where to start. If I understand things correctly, I could insert (pipe-based) filters in the master.cf file, there's a whole 'milter' infrastructure available, and finally I suppose I could simply grep through the mail.info logs. So - any advice? Should I go the 'handle it via master.cf' route, and if so, which daemon should I intercept - 'bounce'? The grep-the-logs route is probably simplest, but I can't help but feel that there is a better way. Any advice appreciated!
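
    For what it's worth, a sketch of the grep-the-logs route: Postfix records each delivery attempt with a status= field in the mail log, so a log follower can dispatch on hard versus soft failures (the handler scripts here are hypothetical):

        # watch the mail log and react to delivery failures
        tail -F /var/log/mail.info | while read -r line; do
            case "$line" in
                *status=bounced*)  /usr/local/bin/handle-bounce "$line" ;;  # hard (5xx) failure
                *status=deferred*) /usr/local/bin/handle-defer  "$line" ;;  # soft (4xx) failure
            esac
        done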

    Read the article

  • Storing bundled AMIs at Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. After a lot of hair pulling, I managed to get a server with Ubuntu up and running with memcached and some other goodies that would make a great package for me. I thought, however, that by storing it as an AMI with this tool I would be able to have memcached available the next time I launched an instance based upon that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow make a command run automatically on server creation, like initiating memcached with "memcached -d -m 1700 -u root", or even a batch of them?
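
    On the second question, a common pattern is to place the startup command where the instance runs it at boot. A minimal sketch for /etc/rc.local on Ubuntu (the flags mirror the command in the question; running as root is discouraged, so a dedicated 'memcache' user is assumed here):

        #!/bin/sh
        # /etc/rc.local -- executed at the end of each multiuser runlevel
        # start memcached as a daemon with a 1700 MB cache, running as 'memcache'
        memcached -d -m 1700 -u memcache
        exit 0

    Newer setups often achieve the same thing with EC2 user-data scripts, which run when an instance first boots.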

    Read the article

  • Email server for a huge number of subscribers

    - by bogha
    My company is thinking of providing a free email account for each of its customers. As a new company, we will assume that our corporate email system will be MS Exchange Server, which will support about 1000 employees. They are asking why not add the customer list as part of the Exchange users. My suggestion was to separate the two systems: for the corporate side we can use Exchange, but for customers (around 30000) we have to use a Linux based system. My only argument was that Linux can be used for enterprise services like this where Microsoft may fail. What do you suggest? And if you are with me on choosing Linux as the server platform, what do you suggest as an alternative to Exchange on Linux? Thank you.

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLs accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on domain names reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information etc.; it was a UTM also. I have tried Endian Firewall, with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to use a community product.

    Read the article

  • How do I drag my widgets without dragging other widgets?

    - by Cypher
    I have a bunch of drag-able widgets on screen. When I am dragging one of the widgets around, if I drag the mouse over another widget, that widget then gets "snagged" and is also dragged around. While this is kind of a neat thing and I can think of a few game ideas based on that alone, that was not intended. :-P

    Background Info: I have a Widget class that is the basis for my user interface controls. It has a bunch of properties that define its size, position, image information, etc. It also defines some events: OnMouseOver, OnMouseOut, OnMouseClick, etc. All of the event handler functions are virtual, so that child objects can override them and make use of their implementation without duplicating code. Widgets are not aware of each other. They cannot tell each other, "Hey, I'm dragging so bugger off!"

    Source Code: Here's where the widget gets updated (every frame):

        public virtual void Update( MouseComponent mouse, KeyboardComponent keyboard )
        {
            // update position if the widget is being dragged
            if ( this.IsDragging )
            {
                this.Left -= (int)( mouse.LastPosition.X - mouse.Position.X );
                this.Top -= (int)( mouse.LastPosition.Y - mouse.Position.Y );
            }

            ... // define and throw other events

            if ( !this.WasMouseOver && this.IsMouseOver && mouse.IsButtonDown( MouseButton.Left ) )
            {
                this.IsMouseDown = true;
                this.MouseDown( mouse, new EventArgs() );
            }

            ... // define and throw other events
        }

    And here's the OnMouseDown event where the IsDragging property gets set:

        public virtual void OnMouseDown( object sender, EventArgs args )
        {
            if ( this.IsDraggable )
            {
                this.IsDragging = true;
            }
        }

    Problem: Looking at the source code, it's obvious why this is happening. The OnMouseDown event gets fired whenever the mouse is hovered over the Widget and the left mouse button is down (but not necessarily in that order!). That means that even if I hold the mouse down somewhere else on screen, and simply move it over anything that IsDraggable, it will "hook" onto the mouse and go for a ride. So, now that it's obvious that I'm Doing It Wrong™, how do I do this correctly?

    Read the article

  • game play strategy in an arena

    - by joulesm
    I am writing a player's behavior for an arena game, and I'm wondering if you can offer some strategies. I'm writing it in Python, but I'm just interested in the high-level game play. Here are the game aspects:

    - The arena is a circle of a given size. The arena size shrinks every round to help break ties.
    - Players are much smaller circles, and can be on teams of 1 or 2 players.
    - Players attack by colliding with other players, and based on the physics of the collision (speed of both players, angle), one could force another player out of the arena.
    - Once a player is out of the arena, they are out of the game (for that round).
    - The goal is to be the only team with players left in the arena, all other players having been pushed (through collisions or mistakes) out of the arena. It is possible for there to be no winner if the last two players exit the arena at the same time.
    - Once the player has been programmed, the game just runs. There is no human intervention in the game.

    I'm thinking it's easiest to implement a few simple programmatic rules for my player to follow. For example: stay close to the center of the arena, attack opponents from the inner side of the arena, etc. Are there any good simple game strategies? Would adding a random aspect to the game help? For example, to avoid predictability by the other team or something. Thanks in advance.

    Read the article

  • Symfony sfControlPanel 500 error only when executing 'tasks'

    - by rrlange
    As a bit of background, I recently had to restore a Symfony site from backup. Ever since the 'successful' restore, I am running into an exception when trying to execute a(ny) task via the web-based sfControlPanel:

        Unable to find PHP executable

        stack trace
        * at sfToolkit::getPhpCli() in SF_ROOT_DIR/plugins/sfControlPanelPlugin/modules/sfControlPanel/actions/actions.class.php line 93

    FYI:

        symfony version 1.0.6
        PHP 5.2.6 (cli) (built: May 2 2008 16:06:40)
        Apache/2.2.3
        CentOS 5.*

    Thank you very much for any and all suggestions as to what might be amiss. Addendum: I neglected to mention that I can run the frontend app (and certain backend apps) perfectly fine via the web. I am also able to run common tasks via the command line (cc etc.).
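
    For reference, as far as I recall sfToolkit::getPhpCli() in symfony 1.0 walks the PATH environment variable looking for a php5 or php binary, so a quick sanity check under the web server's account is a sketch like this (the symlink source path is an assumption - adjust to the actual install):

        # is a PHP CLI binary visible on the PATH the web server sees?
        which php php5
        # if not, one workaround is a symlink into a directory that is on the PATH
        sudo ln -s /usr/local/php/bin/php /usr/bin/php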

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: refined question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:

    - Simple: for a given XYZ post code, what is the average house price for a 3-bedroom house?
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: post code, number of bedrooms, total area, and other deeper insights like building type, year built, features)?

    In addition to the average price, the application should support other property info - maximum or minimum price, etc. - and trends (graphs) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be:

    - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    - What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, DW-related, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so):

    - I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues.
    - DB design - as I understand it, an RDBMS works well if you know the data model at design time. I am expecting the attributes about a property, or another entity (user) that I am going to bring in, to evolve quickly, hence maintenance would be an issue. And as I am going to have multiple users executing queries at the same time, performance would be a bottleneck.
    - Other options like graph DBs (http://www.tinkerpop.com/) seem to be a bit complex (they are good, but using those general-purpose tools makes me feel like I'm doing assembly programming to solve my problem).
    - Big Data solutions are for analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for property listings or similar portals.)

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is because customers realize that many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, then segments specified in storage tiering clause(s) can be moved until the percent of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available. At least that's the premise, since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
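
    For concreteness, a sketch of how those two thresholds and a tiering policy are expressed (this assumes the 12c DBMS_ILM_ADMIN package; the table and tablespace names are hypothetical and the values shown are just examples):

        sqlplus / as sysdba <<'SQL'
        -- tiering becomes eligible once the source tablespace is over 85% used ...
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
        -- ... and segments move until at least 25% of the space is free again
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
        -- a storage tiering policy names the lower-cost target tablespace
        ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_store;
        SQL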

    Read the article

  • automated printouts from a wireless printer

    - by Piotr
    I have a wireless printer which is always on, and an always-on fanless Linux server. Looking at the mprinter project on Kickstarter, I started to wonder if there is already an app somewhere on the internet that can prepare an automated daily printout based on some settings. Things to be printed could include: the weather forecast for my locations, to-dos scheduled for that day, a "quote of the day" or "word of the day", stats from Google Analytics for my site, and many more... I would set the printout for 6:15 every work day so it's on my printer when I am already up, having a coffee. Does anyone know of something that can be used for this purpose? While I know this can be done by combining the power of TeX, cron and a scripting language to manage the dynamic part of the PDF, I believe this is a use case someone has already addressed.
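
    Along the cron-plus-script line mentioned above, the scheduling half is a single crontab entry (make-briefing.sh is hypothetical - it would assemble the day's content; lp ships with CUPS and sends to the default printer):

        # m  h  dom mon dow  command -- print the briefing at 6:15 on weekdays
        15   6  *   *   1-5  /home/user/bin/make-briefing.sh | lp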

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, costs 614€, and is currently on sale for 306€ as a special offer, including one year of free upgrades. The uneven € amounts come from the 799€ list price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a "Just My Code" profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data, the All Methods grid shows a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code. But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and CPU consumption is high, about the places I should look at? The Snapshot summary gives a rough overview, which is ok for a first impression. Next is the Assemblies view, which gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft; by default mscorlib and System are not checked. That is the reason why my By Type window looked nearly empty the first time. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you. That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, ...) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • Ubuntu+Mono+Postgres+ASP.NET 4.0. No problem?

    - by wreck_of_u
    Would this be OK? I'm an ASP.NET developer and I'm planning to build "portable" web app servers based on the Atom D510 mini-ITX. I have run Ubuntu 10 with MySQL alongside separate IIS machines (Win 2k3, 2k8) before with no problems. But now I'm thinking of "packaging" a web/db server into one small, cheap machine. I thought Ubuntu/Mono/Postgres/ASP.NET would be a good idea, but I'm not sure - I have not actually tried it yet. Your thoughts?

    Read the article
