Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • Training v. Teaching

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/28/training-v.-teaching.aspx As some of you may know, I recently accepted a position to teach an undergraduate course at my alma mater. Yesterday, I had my first day in an academic classroom. I immediately noticed a difference in the interactions between the students. They don't act like students in a professional training or conference talk. I wanted to use this opportunity to enumerate some of those differences. The first thing I noticed was the lack of an open environment. This is not to say the class was hostile towards me. I am used to entering the room, bantering with the audience, loosening everyone up a bit, and flowing into the discussion. A purely academic audience does not banter. At least, they do not banter on day one. I think I can attribute this to two factors. The first is a greater perception of authority. In a training or conference environment, I am an equal with the audience. This is true even if I am there as a subject matter expert. We're all professionals. We're all there to learn from each other, share our stories, and enjoy the journey. In the academic classroom, there was a distinct class difference. I had forgotten about this distinction; I had gained professional familiarity with the staff by the time I completed my masters. This leads to the other distinction. There was an expectation of performance. At conferences and professional training, there is generally no (immediate) grading. The class may be preparation for a certification exam, but I'm not the one responsible for delivering the exam. This was not the case in the academic classroom. These students are battling for points, and I am the sole arbiter. These students are less likely to let the material wash over them and apply it to their past experiences. They were heads-down taking notes. I don't want to leave the impression that there was no interaction in the classroom. I spent a good deal of time doing problems with the class on the whiteboard. I tried to get the class to help me work out the steps. This opened up a few of them. After every conference or training class, I always get a few people who email me afterward to continue the conversation. I am very curious to see if anybody comes to my office hours tomorrow. However, that is a curiosity that will have to wait until tomorrow.

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of the personal projects I've built for my own use would get laughed off the face of the planet, but SQL CE has been the solution I was looking for on those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified and module-based solution for the company that I'm working for. We're still using stuff from the 80s, for goodness sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense. Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80s and refuses to use anything current. So I ask you guys, the experts and know-it-alls, where do I start? Are there any gems of good books out there in the haystack of all the "...for Dummies" type of deals? I already have several people backing me in this endeavour, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole. Thank you guys for your time.

    Read the article

  • My co-worker has not been doing such a good job for the past decade. What do I do? [closed]

    - by stijn
    Possible Duplicate: How do I approach a coworker about his or her code quality? I started working with him almost a decade ago, and back then I had never really programmed before, being a young hardware engineer. Right now, however, I have made quite some progress in all areas of software design, and I am much, much more skilled than my co-worker, who is 15 years older and has been programming more than twice as long. He is super nice and definitely smart enough, but lately his lack of skill and performance is starting to drag me down, because we're more and more working on the same codebase. And soon we are going to do a quite ambitious start from scratch, creating a whole new hardware/software system. I feel it is time to address all the issues now, but I do not know how to start. Here are some of the things that I would like to see him improve on: no consistent usage of style, spaces or tabs (e.g. if(something ) a =b ); he adds newlines around pieces of code to make them easier to read, then commits those with messages like 'no changes made'; overall commit messages are useless and so are most of the comments, if there are any (e.g. 'remove solves for bug Rik' if Rik reported a bug), and there is no function/class documentation; lots of spelling errors, in both English and our native language, which are sometimes mixed; nesting six, seven or eight levels deep is no exception, and a lot of functions start with one level already, like if(ptr!=Null){, even when ptr is the result of allocation via new in the constructor; numerous source files have over 10k lines, and of those lines a major part is simply the result of copy-pasting functionality instead of using a function; this includes copying comments, so we end up with 50 occurrences of var=NULL; //TODO TEST this!!!!!!!, and another part is hundreds of lines of dead code; he knows what versioning does, yet comments out old code and places new code underneath it when making changes; his coding skills are below par, especially for the type of rather high-precision applications we do; yet somehow, after a lot of trying and testing, stuff starts to work, but then breaks again some time later because every change causes a waterfall effect; he violates every single item in the C++ FAQ Lite and practices every bad practice I can think of; he still doesn't know how to properly use the debugger, but spends hours inspecting messy logfiles in Notepad on a tiny laptop screen, does not make any adjustments to the settings of the software he uses, and never uses keyboard shortcuts; he does not seem to progress or learn new things at all, and works rather slowly, mostly due to a lack of planning and incorrect usage of tools. How does one deal with this? For starters, how do I make him aware of all these problems? Should I tell the staff about it? And the next step: how do I get him to learn new things and adopt another way of working?

    Read the article

  • NHibernate Session Load vs Get when using Table per Hierarchy. Always use ISession.Get<T> for TPH to work.

    - by Rohit Gupta
    Originally posted on: http://geekswithblogs.net/rgupta/archive/2014/06/01/nhibernate-session-load-vs-get-when-using-table-per-hierarchy.aspx NHibernate ISession has two methods on it: Load and Get. Load allows the entity to be loaded lazily, meaning the actual call to the database is made only when properties on the entity being loaded are first accessed. Additionally, if the entity has already been loaded into the NHibernate cache, then the entity is loaded directly from the cache instead of querying the underlying database. ISession.Get<T> instead makes the call to the database every time it is invoked. With this background, it is obvious that we would prefer ISession.Load<T> over ISession.Get<T> most of the time for performance reasons, to avoid making the expensive call to the database. Let us consider the impact of using ISession.Load<T> when we are using the table-per-hierarchy implementation of NHibernate. Suppose we have a base class/table Animal and a derived class named Snake, with the discriminator column being Type, which in this case is "Snake". If we load this Snake entity using the repository for Animal, we would have an entity loaded, as shown below: public T GetByKey(object key, bool lazy = false) { if (lazy) return CurrentSession.Load<T>(key); return CurrentSession.Get<T>(key); } var tRepo = new NHibernateReadWriteRepository<TPHAnimal>(); var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), true); var snake = animal as Snake; snake is null. As you can see, the animal entity retrieved from the database cannot be cast to Snake even though the entity is actually a snake. The reason is that ISession.Load prevents the entity from being cast to Snake and will throw the following exception: System.InvalidCastException: Message=Unable to cast object of type 'TPHAnimalProxy' to type 'NHibernateChecker.Model.Snake'. Thus we can see that if we lazy load the entity using ISession.Load<TPHAnimal> then we get a TPHAnimalProxy and not a snake. However, if we do not lazy load, the same cast works perfectly fine, since we are loading the entity from the database and the entity being loaded is not a proxy. Thus the following code does not throw any exceptions; in fact, the snake variable is not null: var tRepo = new NHibernateReadWriteRepository<TPHAnimal>(); var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), false); var snake = animal as Snake; if (snake == null) { var snake22 = (Snake) animal; }
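    The behaviour above can be summarised in a short sketch. This is not the post's full repository, just a minimal illustration of the point: with a table-per-hierarchy mapping, Load returns a proxy of the base type that cannot be downcast, so use Get (or a non-lazy fetch) whenever the caller needs the concrete subclass. The constructor injection around the post's GetByKey method is an assumption.

    ```csharp
    using NHibernate;

    // Minimal sketch, not the author's actual repository: prefer Get over Load when a
    // TPH entity may need to be downcast to its concrete subclass.
    public class NHibernateReadWriteRepository<T> where T : class
    {
        private readonly ISession _session;   // injected session (assumption)

        public NHibernateReadWriteRepository(ISession session)
        {
            _session = session;
        }

        public T GetByKey(object key, bool lazy = false)
        {
            // Load<T> returns an uninitialised proxy derived from T (e.g. TPHAnimalProxy),
            // so "animal as Snake" yields null and a direct cast throws InvalidCastException.
            // Get<T> hits the database (or cache) and returns the real mapped subclass.
            return lazy ? _session.Load<T>(key) : _session.Get<T>(key);
        }
    }
    ```

    Usage mirrors the post: GetByKey(id, lazy: false) returns an object that can be cast to Snake when the row's discriminator is "Snake"; the lazy path cannot.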

    Read the article

  • I need help installing Ubuntu 11.10 to a multi-drive system

    - by CookyMonzta
    I have a machine with 3 hard drives; the primary, which is 750GB (drive 0), and 2 others, each of which is 640GB (drives 1 and 2). On the last screen before the actual installation begins, this is how my hard drive configuration looks: /dev/sda [DISK0, 750GB] /dev/sda1 ntfs 104MB [Win7 System Reserved] /dev/sda2 ntfs 499,997MB [Windows 7 Pro] free space 250,052MB [This space intended for Windows 8] /dev/sdb [DISK1, 640GB] /dev/sdb1 ntfs 400,085MB [Windows XP Pro] free space 240,049MB [This space intended for Ubuntu] /dev/sdc [DISK2, 640GB] [This drive intended for various backups] free space 160,033MB /dev/sdc5 ntfs 480,101MB [Acronis Secure Zone] As you can see, I have 3 drives, all SATA. I have Win7 on my first drive (0), WinXP on my second drive (1) and a secure zone for daily backups on my third drive (2). I want to put Ubuntu 11.10 Oneiric Ocelot on the drive that also has XP. I've already used 400GB for XP and I have 240GB remaining, for which, my intention was to create a 4GB swap file and use the rest for Ubuntu itself. This is what my second hard drive looked like, for my intended setup before installation: /dev/sdb /dev/sdb5 swap 4,095MB [Linux swap] /dev/sdb6 ext4 235,951MB [Ubuntu 11.10] Needless to say, this is only the second time I have attempted to install Linux. I managed to get 7.10 Gutsy Gibbon working on an old machine. I have two problems with this installation: Ubuntu asks for a location to install the boot loader (i.e., "Device for Boot Loader Installation"). I already have a boot loader; namely, Acronis OS Selector (from Acronis Disk Director 11). So I decided to put the Ubuntu boot loader in /dev/sdb6 (where I intend to install Ubuntu), to keep it from interfering with my Acronis OS Selector. Once I hit "Install now", I ended up with the following error: "No root file system is defined. Please correct this from the partitioning menu." What am I missing? Did I attempt to put the boot loader in the wrong place? I assume I did, because as I am writing this entry, I am looking at LinuxIdentity.com's Ubuntu 11.04 Natty Narwhal magazine, and I see a screenshot (Figure 7 on Page 13) that implies that the boot loader can be installed anywhere, including the first hard drive (in the MBR, which would obviously force me to reinstall the Acronis OS Selector) or even on a floppy. But why do I get an undefined root file system error? I thought /dev/sdb6 was the root file. Obviously I'm missing something in the installation procedure. Should I try installing it in Windows using the WUBI Installer? I assume that, if I attempt to install Ubuntu from WinXP (on the second drive), it will automatically install Ubuntu on the empty partition alongside XP. But will I have the option of creating a swap partition? And what if the WUBI Installer searches all of my drives and decides to install Ubuntu on my first drive's empty partition (which I have left empty for Win8 upon its release)?

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not compiz, but is actually Xorg. I don't know why, but by quickly maximizing the terminal and taking a screenshot with top running before the problem went away, I am able to see that Xorg takes up 72% of the CPU, with bamfdaemon taking up 18% and compiz taking up 14%. It seems the NVIDIA drivers are to blame; I will play more with settings and perhaps do a clean nvidia-current install to try to fix the problem. I'm having a very annoying problem with high CPU usage. I'm running 12.04 with the latest drivers and nvidia-current installed. I had not had any issues for days; now I have a strange problem. Unity 3D runs great most of the time, 1-2% CPU usage with only Transmission running in the background. Windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity bar on the left, my CPU jumps to about 80% and all maximize and minimize effects slow down. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times. Hitting Alt + Tab to bring up even a single window fixes the problem. The window I bring back up doesn't even have to be maximized to solve the problem. Hitting the Super button to bring up the Dash makes the CPU drop back to idle until I close it, then the high CPU usage resumes. I believe the problem is compiz, but even with only a terminal running "top", I have to minimize it to the tray for the problem to show, so I can't see the problem process. I can only tell about the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell there is very poor performance in all applications when everything is minimized. I reset compiz back to defaults, tried the post-release update NVIDIA drivers, and played with vsync settings in both the NVIDIA settings and compiz. I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D. Specs are a Core 2 Duo 2.0GHz, 4GB DDR2 RAM, 2x 320GB HDDs in RAID 0, and an NVIDIA GTX 260M graphics card.

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are: automatic switchover to the secondary system during planned outages; better monitoring of source systems' performance and automated switchover to the standby system in case of an outage with the primary system; automatic switchover from initial load to changed data movement; automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency; automatic stoppage of the Delivery module to allow end-of-day reporting; and finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers. If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions to the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
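    As a purely illustrative contrast (the type and property names below are hypothetical, not taken from Udi Dahan's article), a task-based command captures a single intent with a tiny payload, while a CRUD-style command carries many unrelated attributes and so invites exactly the concurrency conflicts described above:

    ```csharp
    using System;

    // Task-based command: one intent, one small unit of work.
    public sealed class MakeCustomerPreferredCommand
    {
        public Guid CustomerId { get; set; }
    }

    // CRUD-style command: an Excel-like "update everything" message. Two users editing
    // different fields of the same customer now collide on the same coarse message.
    public sealed class UpdateCustomerCommand
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public bool IsPreferred { get; set; }
        public string MaritalStatus { get; set; }
    }
    ```

    Whether every attribute deserves its own command, or whether related attributes should be batched into one intent, is exactly the granularity judgement the question is asking about.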

    Read the article

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you’ve switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu. NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise. Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter. sudo apt-get install tint2 Type your password at the prompt and press Enter. The progress of the installation displays and then a message displays saying how much disk space will be used. When asked if you want to continue, type a “y” and press Enter. When the installation has finished, close the Terminal window by typing “exit” at the prompt and pressing Enter. Click the Search button at the top of the Unity bar. Start typing “startup applications” in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it. On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application. This name displays in the list on the Startup Applications Preferences window. Type “tint2” in the Command edit box, enter a description in the Comment edit box, if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window. Log out and log back in to make the Taskbar available on the desktop. You do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows. If you decide that you don’t want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window. You don’t need to delete it from the list.

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience. In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning hourglass go on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks were complained about for a long time. Something had to be done. A few new designs came in PeopleSoft 9.2, helping users access their everyday work areas more easily and quickly. For example, Dashboard and Work Center aggregate the most-accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search gets users to a specific page directly. Today we'll talk about the Actions menu. Most PeopleSoft pages are shared between individual products and product lines. This means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages. Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before: Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1. In PeopleSoft 9.1 it took 5 steps + page loading time + additional verification time for making sure the correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete the Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take the action. Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2. The new menu is placed at row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps the user scan and access the correct option quickly. The new menu's small size and structure enable users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more loading multiple pages. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can: access any target page no matter how far it is buried from the starting point; reduce navigation and page-load time; improve productivity and reduce errors. The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block-level using secure hashing (for negligible chance of collisions) such as SHA256 or TTH. Duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing-up again! I don't mind a performance hit, reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.) So what's the best way of doing this? I've looked at some options: lessfs - Looks unmaintained. Any good? [Opendedup/SDFS][3] - Java? Could I use this on Android?! What does [SDFS][4] stand for? [Btrfs][5] - Some patches floating around on mailing list archives, but no real support. [ZFS][6] - Hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence. Also, 2 years ago I had a go at an attempt in Python using Fuse at the file-level to be used over the top of a typical solid FS such as EXT4, but I found Fuse for Python underdocumented and didn't manage to implement all of the system calls. My first post here, so I can't post more than 2 links until I get over 10 rep: [3]: http://www.opendedup.org/ [4]: https://en.wikipedia.org/w/index.php?title=SDFS&action=edit&redlink=1 [5]: https://en.wikipedia.org/wiki/Btrfs#Features [6]: https://en.wikipedia.org/wiki/ZFS#Linux
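    As a rough conceptual sketch of the inline, hash-based block deduplication described above (this is not how lessfs, SDFS, Btrfs or ZFS are implemented; the 4 KB block size and in-memory index are assumptions for illustration only):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;

    // Conceptual sketch: split a stream into fixed-size blocks, identify each block by
    // its SHA-256 digest, and store a block only the first time it is seen. A real
    // filesystem would persist the index and reference-count the blocks.
    public static class BlockDedup
    {
        private const int BlockSize = 4096; // assumed block size

        public static void Store(Stream input, IDictionary<string, byte[]> blockStore)
        {
            using (var sha = SHA256.Create())
            {
                var buffer = new byte[BlockSize];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var block = new byte[read];
                    Array.Copy(buffer, block, read);
                    string hash = BitConverter.ToString(sha.ComputeHash(block));

                    if (!blockStore.ContainsKey(hash))   // duplicate blocks are never written again
                        blockStore[hash] = block;
                }
            }
        }
    }
    ```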

    Read the article

  • Setting MTU on Exalogic

    - by csoto
    For many reasons, a system administrator may want to change the MTU settings of a server. But in a system like Exalogic, which contains lots of interconnected nodes and other various components, it's important to understand how this applies to the different networks. For example, when bringing up bonding of InfiniBand, an error like the following may be thrown: Bringing up interface bond1: SIOCSIFMTU: Invalid argument Both scripts ifcfg-ib0 and ifcfg-ib1 (from the /etc/sysconfig/network-scripts/ directory) have MTU set to 65500, which is a valid MTU value only if all IPoIB slaves operate in connected mode and are configured with the same value, so the line below must be added to both network scripts and then the network restarted: CONNECTED_MODE=yes By the way, an error of the form "SIOCSIFMTU: Invalid argument" indicates that the requested MTU was rejected by the kernel. Typically this would be due to it exceeding the maximum value supported by the interface hardware. In that case you must either reduce the MTU to a value that is supported or obtain more capable hardware. This problem has been seen when trying to modify the MTU using the ifconfig command, as in the example output below: [root@elxxcnxx ~]# ifconfig ib1 mtu 65520 SIOCSIFMTU: Invalid argument It's important to stress that in most cases the nodes must be rebooted after the MTU size has been changed. Although in some circumstances it may work without a reboot, that is not how it is typically documented. Now, in order to achieve reduced memory consumption and improve performance for network traffic received on IPoIB-related interfaces, it is recommended to reduce the MTU value in the interface configuration files for IPoIB-related bonds from 65520 to 64000. The change needs to be made to the interface configuration files under the /etc/sysconfig/network-scripts directory and applies to the interface configuration files for bonds over IPoIB-related slave devices, for example /etc/sysconfig/network-scripts/ifcfg-bond1. However, keep in mind that the numeric portion of the interface filenames corresponding to IPoIB interfaces is expected to vary across compute nodes and vServers, and so cannot be relied upon to identify which interface files are for bonds over IPoIB rather than EoIB-related slave interfaces. To fix these MTU values to the recommended settings, there are very useful instructions and a script in MOS Note 1624434.1, and it is applicable to both physical and virtual configurations of Exalogic. Regarding the recommended MTU value for EoIB-related interfaces, its maximum appropriate value is 1500. If for some reason a vServer has been created with a higher value (set in the /etc/sysconfig/network-scripts/ifcfg-bond0 file), then it must be fixed. An error like the following could be thrown under this circumstance: [root@vServer ~]# service network restart ... Bringing up interface bond0:  SIOCSIFMTU: Invalid argument Also, an error like the one below can be seen in the /var/log/messages file of the vServer: kernel: T5074835532 [mlx4_vnic] eth1:vnic_change_mtu:360: failed: new_mtu 64000 2026 MOS Note 1611657.1 is very useful for this purpose.

    Read the article

  • How do I set image position in conky

    - by realitygenerator
    I copied and modified an existing .conkyrc file from the ubuntu forum and I'm trying to place the LinuxMint logo in a specific position Below are my conkyrc file and the screenshot # UBUNTU-CONKY # A comprehensive conky script, configured for use on # Ubuntu / Debian Gnome, without the need for any external scripts. # # Based on conky-jc and the default .conkyrc. # INCLUDES: # - tail of /var/log/messages # - netstat shows number of connections from your computer and application/PID making it. Kill spyware! # # -- Pengo # # Create own window instead of using desktop (required in nautilus) own_window yes own_window_type desktop own_window_transparent yes own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager # Use double buffering (reduces flicker, may not work for everyone) double_buffer yes # fiddle with window use_spacer right # Use Xft? use_xft yes xftfont URW Gothic:size=8 xftalpha 0.8 text_buffer_size 2048 # Update interval in seconds update_interval 3.0 # Minimum size of text area # minimum_size 250 5 # Draw shades? draw_shades no # Text stuff draw_outline no # amplifies text if yes draw_borders no uppercase no # set to yes if you want all text to be in uppercase # Stippled borders? stippled_borders 3 # border margins border_margin 9 # border width border_width 10 # Default colors and also border colors, grey90 == #e5e5e5 default_color grey own_window_colour brown own_window_transparent yes # Text alignment, other possible values are commented #alignment top_left #alignment top_right #alignment bottom_left #alignment bottom_right. alignment top_middle # Gap between borders of screen and text gap_x 10 gap_y 10 #Display temp in fahrenheit temperature_unit fahrenheit #Choose which screen on which to display # stuff after 'TEXT' will be formatted on screen TEXT $color ${color green}SYSTEM ${hr 2}$color $nodename $sysname $kernel on $machine LinuxMint 11 "Katya" (Oneric) ${image ~/Conky/Logo_Linux_Mint.png -s 80x60 -f 86400} ${color green}CPU ${hr 2}$color ${freq}MHz Load: ${loadavg} Temp: ${hwmon temp 1} $cpubar ${cpugraph 000000 ffffff} NAME PID CPU% MEM% ${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1} ${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2} ${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3} ${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4} ${color green}MEMORY / DISK ${hr 2}$color RAM: $memperc% ${membar 6}$color Swap: $swapperc% ${swapbar 6}$color Root: ${fs_free_perc /}% ${fs_bar 6 /}$color hda1: ${fs_free_perc /media/sda1}% ${fs_bar 6 /media/sda1}$color ${color green}NETWORK (${addr eth1}) ${hr 2}$color Down: $color${downspeed eth1} k/s ${alignr}Up: ${upspeed eth1} k/s ${downspeedgraph eth1 25,140 000000 ff0000} ${alignr}${upspeedgraph eth1 25,140 000000 00ff00}$color Total: ${totaldown eth1} ${alignr}Total: ${totalup eth1} ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr} ${color green}LOGGING ${hr 2}$color ${execi 30 tail -n3 /var/log/messages | awk '{print " ",$5,$6,$7,$8,$9,$10}' | fold -w50} ${color green}FORTUNE ${hr 2}$color ${execi 120 fortune -s | fold -w50} I want to put the mint logo right after the word (oneric). Any help would be greatly appreciated.

    Read the article

  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events arrive at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, the most current state of the entity needs to be processed. There's a point of no return during processing where the current state of the entity (and the state is consistent, i.e. no partial update is made) is saved somewhere else and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one processing run in progress (before the point of no return), i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern to cancel the current processing before the point of no return, or abandon the processing results, if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database, with some files on disk, and the system is in .NET with web services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return. Though this looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and this approach also lacks elegance. Is there a name for this pattern and an existing solution?
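    A minimal sketch of the queue-table check described above. IUpdateQueue and its members are hypothetical stand-ins for the database queue table; in a real implementation the "has newer updates?" check and the persist step would need to run atomically (for example in one transaction) to close the remaining race window:

    ```csharp
    using System;

    // Hypothetical abstraction over the queue-like table: each arriving update gets an
    // increasing id per entity.
    public interface IUpdateQueue
    {
        long LatestUpdateId(Guid entityId);
        bool HasNewerThan(Guid entityId, long updateId);
    }

    public sealed class EntityProcessor
    {
        private readonly IUpdateQueue _queue;

        public EntityProcessor(IUpdateQueue queue)
        {
            _queue = queue;
        }

        // Returns true if this run made it past the point of no return.
        public bool Process(Guid entityId, Action gatherData, Action persistState)
        {
            long startedAt = _queue.LatestUpdateId(entityId);

            gatherData(); // work before the point of no return; cheap to abandon

            // Point of no return: if a newer update arrived meanwhile, discard this run;
            // the newer update has triggered (or will trigger) its own processing.
            if (_queue.HasNewerThan(entityId, startedAt))
                return false;

            persistState(); // beyond this point processing ignores any further updates
            return true;
        }
    }
    ```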

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files. Which is not much more useful, and maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering, are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • BI&EPM in Focus November 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    IBM is Embracing Oracle Exalytics: The Velocity of Thought and Action (link) Customers: Ambulance Victoria, Australia, uses analytics and modelling to serve the expanding needs of a growing population (link) Cablemás Selects Oracle to Speed Customer Data Insights (link) National Instruments Introduces New Business Intelligence Solutions—Runs Reports up to 30x Faster, and Expands Customer Insight (link) FLSmidth Ensures Precise, Transparent Financial Reporting at All Business Levels, Reduces Financial Consolidation Time by up to 40% (link) Enterprise Performance Management: Partner Edgewater Ranzal Webinar Series: Mitigate Your Biggest EPM Project Risk - Thursday, 21st November - Register here: 4.00 GMT Capital Planning in the Energy Industry - Tuesday, 26th November - Register here: 4.00 GMT Driving Value in the Retail Industry Using Hyperion Strategic Finance (HSF) - Tuesday, 10th December - Register here: 7.00 GMT Dec 11, Look Smarter Selling Hyperion Profitability & Cost Management (HPCM) Webcast (link) EPM System Infrastructure Tips & Tricks Support: November EPM Patch Set Updates released Business Analytics Monthly Index - October 2013 Hyperion Smart View Assistance with OBIEE 11.1.1.7 Hyperion Disclosure Management 11.1.2.3.330 PSU 17444967 [Doc ID 1592645.1] Hyperion Financial Close Management (FCM) 11.1.2.3.100 PSU 16989110 [Doc ID 1592644.1] Business Intelligence: BI-Apps Whitepaper: Packaged Analytic Applications: Accelerating Time and Value By Wayne Eckerson (link) BI Apps Blog: A Closer Look at Oracle Price Analytics (link) Blog: Taking Your Business Scorecard Golfing (link) Blog: Practical Uses of Business Scorecards, from Company-Wide to Process Specific (link) Nov 19, Big Data at Work Series: How Delphi Harnesses Big Data to Improve Warranty Response & Customer Satisfaction (link) Rittman Mead Blog: Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration Support: OBIEE Suite Bundle Patches (understand OBIEE naming convention) [Doc ID 1591422.1] Support Blog: Java update alert: Essbase Administration Services (EAS) 11.1.2.3 (link) Support Blog: OBIEE 11.1.1.7.131017 now available (link)

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted.  Here’s a Reader’s Digest version of the changes: 1.  Changes to revenue recognition if you sell bundled products with multiple deliverables: Old Rule: You needed to objectively establish the “fair value” of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New Rule (ASU 2009-13): “Objective” proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete. 2.  Changes to revenue recognition for devices with embedded software: Old Rule: Hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New Rule (ASU 2009-14): Software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner. 3.  Fair value disclosures: Companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1(quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information). 4.  Consolidation of variable interest entities (a.k.a special purpose entities): Consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.  5.  XBRL: Starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting. 6.  Non-GAAP financial disclosures: Companies that report non-GAAP measures of performance, such as EBITDA in SEC filings, have more flexibility.  The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.  7.  Loss contingencies disclosures: Companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner instead of after the cases are settled.

    Read the article

  • Proven Approach to Financial Progress Using Modern Best Practice

    - by Oracle Accelerate for Midsize Companies
    by Larry Simcox, Sr. Director, Oracle Midsize Programs. Top performing organizations generate 25 percent higher profit margins and grow at twice the rate of their competitors. How do they do it? Recently, Dr. Stephen G. Timme, President of FinListics Solutions and Adjunct Professor at the Georgia Institute of Technology, joined me on a webcast to answer that question. I've known Dr. Timme since my days at G-log, when we worked together to help customers determine the ROI of transportation management solutions. We were also joined by Steve Cox, Vice President of Oracle Midsize Programs, who recently published an Oracle e-book, "Modern Best Practice Explained". In this webcast, Cox provides his perspective on how the best performing companies are moving from best practice to modern best practice. Watch the webcast replay and you'll learn about the easy-to-follow, top-down approach to: Identify processes that should be targeted for improvement; Leverage a modern best practice maturity model to start a path to progress; Link financial performance gaps to operational KPIs; Improve cash flow by benchmarking key financial metrics; Develop intelligent estimates of achievable cash flow benefits. Click HERE to watch a replay of the webcast. You might also be interested in the following: Video: Modern Best Practices Defined; AppCast: Modern Best Practices for Growing Companies. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter. Sign up to receive the latest communications from Oracle's industry leaders and experts. Larry Simcox, Senior Director, Oracle Midsize Programs, responsible for supporting and creating marketing content, communications, sales and partner program support for Oracle's go-to-market activities for midsize companies. I have over 17 years' experience helping customers identify the value and ROI from their IT investment. I live in Charlotte, NC with my family and my dog Dingo. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3GB persistence) I suddenly get this error: (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs (it says I can type "help" for commands, but I don't know anything about how to go on from there; I'm totally new to Linux). The reason I say my case is different is because my Ubuntu worked fine for over a week, even pretty fast, and now this problem happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially when booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem as above after a while when booting and had to re-create a live USB Linux several times. Installing on the hard drive is not an option because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu. As I said, I used Ubuntu without problems for more than a week, and before that as well for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD flash card, and when I was booting I selected F6 and then the first option, "no acpi" or something like that... I had used it before and noticed it affected how long Linux took to boot. This time it caused this error. Now even when I boot normally/default I get this error. Now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there and the filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes/downloaded programs to my SD card persistent Ubuntu and would hate to lose them, since downloading is expensive for me, and since the problem seems to recur...) Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post; I just didn't know what info is relevant and what isn't! Kon

    Read the article

  • My Doors - Why Standards Matter to Business

    - by [email protected]
    By Brian Dayton on April 8, 2010 9:27 PM "Standards save money." "Standards accelerate projects." "Standards make better solutions." What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person--trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc. When "standards" come up in presentations and discussions do you: - Nod your head politely - Tune out and check your smart phone - Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?" Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen." - 24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors - 20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry - 19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself - 5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of: - How standards can benefit your business - Your IT organization's philosophy around standards - Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there. Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company's stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind. Book: DevOps for Developers | The Java Source The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Weildt's recent blog post offers an overview of Java Champion Michael Hutterman's new book, DevOps for Developers, now available from Apress. Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs." Oracle in the Cloud: Oracle EBusiness Suite sizing | Tom Laszewski Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing of Oracle EBusiness Suite. Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability. Webcast: Reduce Costs with Oracle's Database Storage Management Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event Date: Tuesday, November 6, 2012 Event Time: 10 a.m. PT/1 p.m. ET Thought for the Day "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay Source: softwarequotes.com

    Read the article

  • A little gem from MPN – FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, needs to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :) Course Structure The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process. Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers. Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform. Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods. Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio. Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transaction and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs. Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation. Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud. Module 8: Summary: Provides an overall review of the course.

    Read the article

  • What to answer when a customer asks which of two equivalent technologies should be used?

    - by MainMa
    As a freelancer, I am often asked by my customers to choose between similar options, neither of which is better than the other. Examples: “Does my e-commerce website need to be in PHP or ASP.NET?” “Should I host this ordinary web service in the cloud or use an ordinary hosting service?” “Which is better for my new website: MySQL or Oracle?” etc.

    In maybe 1% of cases at most, the choice is relevant and there is a real, objective reason to use one over the other, based on precise metrics and studies. In all other cases, it doesn't matter at all. It is totally, completely irrelevant, either because there are no implications¹, because those implications are too small to be taken into account², or, finally, because it's impossible to predict those implications³.

    If you know one technology and not the other, the answer to those questions is easy: “You can write the application in either C# or Java, both being probably equivalent in your case. Note that I'm a C# developer, so if you choose Java, I would not be able to work on your project and you would need to find another freelancer.” When you know both technologies, you can't answer that way. In this case, how do you explain to the customer that the question he asks is flamewar material and has no real consequences for his project? In other words, how do you explain that you've chosen one technology rather than an equivalent one for reasons related to human resources, without giving the impression of being unprofessional or of not caring about the project?

    ¹ Example: Is MySQL better (or worse), performance-wise, than Oracle for a personal website which will be accessed by, oh, let's be optimistic, two people per day?
    ² Example: for a given project, I was asked to assess whether Windows Azure hosting would be cheaper than hosting the same application with a well-known ASP.NET hosting provider. The cost turned out to be exactly the same.
    ³ Example: your customer has an idea for a future application (the idea itself being extremely vague). There is no business plan, no requirements, nothing at all. Just an idea. You are asked if Java is better than C# for this app. What do you answer?

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages.

    In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.
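
    In a generic setting, the same fix means holding the ID selector as an Expression<Func<TEntity, int>> so that both Max() calls bind to Queryable.Max and translate to SQL MAX(). The HasMorePages helper below is a hypothetical sketch, not the author's actual base class:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class PagingSketch
        {
            // True if there are records beyond the current page.
            // Because idExpression is an Expression, both Max() calls resolve to
            // Queryable.Max and are translated into SQL MAX() by the EF provider.
            public static bool HasMorePages<TEntity>(
                IQueryable<TEntity> fullQuery,
                IQueryable<TEntity> page,
                Expression<Func<TEntity, int>> idExpression)
            {
                var maxIdInPage = page.Max(idExpression);
                var maxIdOverall = fullQuery.Max(idExpression);
                return maxIdOverall > maxIdInPage;
            }
        }

        // Usage: the lambda is captured as an expression tree at the call site.
        // var hasMore = PagingSketch.HasMorePages(query, query.Take(pageSize), od => od.SalesOrderDetailID);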

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .NET Framework – the xcopy deploy instructions have been updated to show you how to do this.

    Below are the notes from the release page.

    ======

    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new...

    Analysis Services Tabular:
    - Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build

    Fixes and Updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 (like Lookups) and new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles report points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    - SSIS Performance Visualization: Index out of range
    - Fixing bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

< Previous Page | 695 696 697 698 699 700 701 702 703 704 705 706  | Next Page >