Search Results

Search found 28682 results on 1148 pages for 'drop down menu'.

Page 697/1148 | < Previous Page | 693 694 695 696 697 698 699 700 701 702 703 704  | Next Page >

  • How to dump remote database without mysqldump?

    - by deceze
    I want to dump the database of my remotely hosted site at regular intervals using a shell script. Unfortunately the server is locked down pretty tight: it has no mysqldump installed, binaries can't be executed by normal users or in home directories (so I can't "install" it myself), and the database lives on a separate server, so I can't grab the files directly. The only thing I can do is log into the webserver via SSH and establish a connection to the database server using the mysql command-line client. How can I dump the contents to a file in SQL format, a la mysqldump? Bonus: if possible, how can I dump the contents directly to my end of the SSH connection?
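
    One hedged approach, assuming the database host is reachable from the webserver and using placeholder hostnames and credentials: run mysqldump on your own machine and tunnel the connection through the webserver with SSH port forwarding, so the dump is written directly on your end.

        # Forward local port 3307 through the webserver to the remote database host
        ssh -f -N -L 3307:db.example.com:3306 user@webserver.example.com

        # Run mysqldump locally against the tunnel; the SQL file lands on your side of the connection
        mysqldump --protocol=TCP -h 127.0.0.1 -P 3307 -u dbuser -p mydatabase > mydatabase.sql

    If port forwarding is blocked, a cruder fallback is piping queries through the mysql client over SSH (mysql --batch -e "..."), but that produces tab-separated output rather than mysqldump-style SQL.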

    Read the article

  • Drobo not mounting. Disk repair doesn't work either.

    - by kohei
    While I was transferring data to my 2nd-gen Drobo, the power went out. Now the Drobo won't mount on my OS X 10.6.3 machine. I tried repairing it with Disk Utility and got this output: Verify and Repair volume “disk1s2”. Checking Journaled HFS Plus volume. Invalid key length. Invalid record count. Catalog file entry not found for extent. The volume could not be verified completely. Volume repair complete. Updating boot support partitions for the volume as required. Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files. I tried DiskWarrior too, but it doesn't work either: it tells me it needs more memory to continue and then shuts down. Does anyone know a solution?
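
    The "Invalid key length" and "Catalog file entry not found" messages point at catalog B-tree damage, which Disk Utility often cannot rebuild. Before reformatting, one hedged thing to try from Terminal (the device identifier below is taken from the error message and may differ on your system):

        diskutil list                    # confirm the Drobo volume's identifier (disk1s2 in the error above)
        sudo fsck_hfs -fy /dev/disk1s2   # force a full verify/repair of the HFS+ volume
        sudo fsck_hfs -r /dev/disk1s2    # rebuild the catalog B-tree, if your fsck_hfs supports -r

    If that still fails, the realistic options are copying off whatever data a recovery tool can reach and then reformatting, as Disk Utility suggests.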

    Read the article

  • MacBook Pro 13"

    - by LINUX
    I recently swapped the 250GB hard drive for a 640GB 7200rpm drive. Now I only get about 5 hours of battery life and the hard drive spins constantly. How can I make the drive spin down after, say, 5 minutes of inactivity? Thanks in advance.
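
    A hedged suggestion for OS X: the built-in power management tool has a disk-sleep timeout, in minutes, that can be set separately for battery power.

        sudo pmset -b disksleep 5    # on battery, spin the disk down after 5 idle minutes
        pmset -g                     # show the currently active power settings to confirm

    Also worth noting: a 7200rpm drive simply draws more power than the stock drive it replaced, so spindown will only claw back part of the lost battery life.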

    Read the article

  • Help to calculate hours and minutes between two time periods in Excel 2007

    - by Mestika
    I’m working on a very simple timesheet for my work in Excel 2007 but have run into trouble calculating the hours and minutes between two times. I have a timestart field, which could for example be 08:30, and a timestop field, which could for example be 12:30. I can easily work out the result myself (4 hours), but how do I create a “total” column all the way down the sheet that calculates the hours spent on each entry? I’ve tried playing around with the time settings but it just gives me wrong numbers each time. Sincerely, Mestika
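
    A hedged sketch, assuming timestart is in column A and timestop in column B starting at row 2 (the cell references are placeholders): subtract the two times and let the cell format decide how the result is displayed.

        C2: =B2-A2           custom format [h]:mm   ->  4:00  (hours and minutes)
        D2: =(B2-A2)*24      format as Number       ->  4.00  (decimal hours)
        Total: =SUM(C2:C100) with the [h]:mm format, so sums past 24 hours display correctly

    Fill the formula down the column next to each entry; the wrong numbers usually come from leaving the result cell formatted as a plain time, which wraps around at 24 hours.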

    Read the article

  • How to allow IAM users to setup their own virtual MFA devices

    - by Ali
    I want to let my IAM users set up their own MFA devices through the console. Is there a single policy I can use to achieve this? So far I can achieve it with a number of IAM policies, letting them list all MFA devices, list users (so that they can find themselves in the IAM console), and so on. I am basically looking for a more straightforward way of controlling this. I should add that my IAM users are trusted, so I don't have to lock them down to the minimum possible (although that would be nice); if they can see a list of all users, that is OK.
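
    A hedged sketch of a single inline policy, along the lines of AWS's documented "manage your own virtual MFA device" example. The action names exist in IAM, but the exact set the console needs has changed over time, so verify against the current IAM documentation:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "ManageOwnVirtualMFADevice",
              "Effect": "Allow",
              "Action": ["iam:CreateVirtualMFADevice", "iam:DeleteVirtualMFADevice"],
              "Resource": "arn:aws:iam::*:mfa/${aws:username}"
            },
            {
              "Sid": "ManageOwnMFAOnOwnUser",
              "Effect": "Allow",
              "Action": ["iam:EnableMFADevice", "iam:DeactivateMFADevice",
                         "iam:ResyncMFADevice", "iam:ListMFADevices"],
              "Resource": "arn:aws:iam::*:user/${aws:username}"
            },
            {
              "Sid": "ConsoleListing",
              "Effect": "Allow",
              "Action": ["iam:ListUsers", "iam:ListVirtualMFADevices"],
              "Resource": "*"
            }
          ]
        }

    The last statement is what keeps this from being truly minimal: listing all users and virtual MFA devices is what lets the console pages render, which matches your note that broad listing is acceptable.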

    Read the article

  • Mouse cursor in Terminal?

    - by marienbad
    I use Mac OS X Terminal.app. But the answer to this would probably apply to any typical UNIX-ish terminal emulator in a graphical environment. Question -- what do I do to: Use my mouse cursor to click on a character-position in the current line of the terminal, and have the terminal's cursor jump to that spot? Typically, you have to hold down an arrow key to move to the correct cursor position. If you're pasting in a long string of text at the shell prompt, or working in an editor like VI, this can take a long time. I know editors have other navigation keys like jump-words, but I like my mouse cursor.

    Read the article

  • Lot of Multicast traffic on LAN

    - by Nel
    Recently the whole network at work has been getting hit by multicast traffic originating on the LAN itself. I did some investigating, and the service that seems to be responsible is WS-Discovery. I have attached a screenshot of Wireshark capturing the traffic. I tried shutting down the machine the traffic appeared to originate from, but the multicast traffic is still present on the network. My network topology: 2 subnets, 10.10.10.0/24 and 10.20.10.0/24, with a Debian system as the gateway. We have 3 switches for 3 floors, all unmanaged D-Link 24-port switches, so multicast blocking at the switch level is out of the question. Any solutions? :(
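
    A hedged note: WS-Discovery announcements go to the multicast group 239.255.255.250 on UDP port 3702 and stay within the local subnet, so the Debian gateway cannot filter them between hosts on the same switch. A capture like the one below (the interface name is a placeholder) ranks the senders, so the discovery service can be disabled on each offending machine at the source:

        sudo tcpdump -i eth0 -n -c 500 'udp port 3702 and dst 239.255.255.250' \
          | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn

    On Windows hosts the usual culprits are WSD-based printing and the function-discovery services; on network printers it is often a WSD option in the network settings.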

    Read the article

  • Stackify Gives Devs a Crack at the Production Server

    - by Matt Watson
    Originally published on SDTimes.com on 7/9/2012 by David Rubinstein.It was one of those interviews where you get finished talking about a company’s product, and you wonder aloud, “Well, THAT makes sense! Why hasn’t anyone thought of that before?” Matt Watson, CEO of Kansas City, Mo.-based startup Stackify, was telling me that the 10-person company is getting ready to launch its product in August (it’s in beta now) that will give developers an app-centric look into production servers so they can support and troubleshoot apps and fix bugs. Of course, this hasn’t happened in the past because of the security concerns of IT administrators, and a decided lack of expertise on the part of developers. Stackify installs on a server and acts like a proxy for developers, collecting data about the environment, discovering all the applications, scanning for config file changes, and doing server monitoring. “We become the central point that developers can see everything they need to know about their applications,” he said. “Developers can look at the files that are deployed, and query databases in a safe way.”  In his words:“The big thing we’re hoping is just giving them (developers) visibility. Most companies want to hire the junior developers that they pay $50,000 a year right out of college to do application support and troubleshooting and fix bugs, but those people don’t have access to production servers to troubleshoot. It becomes very difficult for them to do their job, so they end up spending all of their day bugging the senior developers, the managers or the system administrators to track down this stuff, which creates a huge bottleneck. And so what we can do is give that visibility to those lower-level people so that they can do this work and free up the higher-level people so they can be working on the next big thing.”Stackify itself might just prove to be the next big thing.

    Read the article

  • Recommended website performance monitoring services? [closed]

    - by Dennis G.
    I'm looking for a good performance monitoring service for websites. I know about some of the available general monitoring services that check for uptime and notify you about unavailable services, but I'm specifically looking for a service with an emphasis on performance. That is, I would like to see reports with detailed performance statistics from multiple locations worldwide, with a breakdown of how long it took to fetch the different website resources, including third-party scripts such as Google Analytics (the report should contain details similar to the FireBug Net tab). Are there any such services, and if so, which one is the best?
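
    Not a replacement for a hosted monitoring service, but for quick spot checks from a single location, curl's timing variables give a rough breakdown of one request (the URL is a placeholder):

        curl -o /dev/null -s -w 'dns %{time_namelookup}s  connect %{time_connect}s  first byte %{time_starttransfer}s  total %{time_total}s\n' http://www.example.com/

    It only times a single resource per invocation, though; it will not produce the waterfall of third-party scripts that a FireBug-style report shows.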

    Read the article

  • IP telephony open source systems

    - by danke
    I'm trying to pick an IP telephony technology to learn. I had heard of Asterisk, trixbox, and FreePBX, and my head was already spinning over what to learn. Then I came across this article listing even more (Kamailio, Yate, CallWeaver, FreeSWITCH, SipXecs), and now my head REALLY is spinning: http://www.cio.com.au/article/323016/five_open_source_ip_telephony_projects_watch . Can someone give me a rundown of how all these technologies tie together? And what is the trend now? I'd like to start learning. Note: please re-tag this question if you know better tags, because I'm new to this field.

    Read the article

  • How can I get something like Nautilus's "spatial" behaviour in 11.10 and later?

    - by Dylan McCall
    Until recently, Nautilus had an optional "spatial" mode as an alternative to its usual browser mode. Users could enable it by opening Nautilus's preferences and selecting "Open each folder in its own window." Choosing this option resulted in a stripped down interface, and folders would always open in the same place on the screen. It looked something like this: That checkbox is still there, but it doesn't do the same thing: while every folder opens in a new window, those windows are all the same size and positioned the same way. Nautilus doesn't restore the size and position of each folder, which is what spatial mode was all about. This difference really shows if someone opens a folder in a new window, and then opens the same folder again. Nautilus will create another window instead of focusing the existing window. I help someone who is currently using Ubuntu 10.04, and he will be upgrading to 12.04 in the future. He is very comfortable with Nautilus's spatial behaviour and I think he would be disappointed to lose it. Is there an alternative file manager, or perhaps some less obvious option, that will give him an interface similar to the one he is comfortable with?

    Read the article

  • Changing Career to Game Development

    - by Don Carleone
    I'm enthusiastic about shifting my career to the game development sector, and ready to do it, but first I have some questions. I currently work as a senior .NET programmer; right now I can only write code in C#, but I have started learning C++. I'm a computer engineer, so I already know C, though I never used it on big projects: I once wrote "Game of Life" in C using only a linked-list data structure, to push my limits. Now I'm thinking of moving into game development. I love playing console games, and I respect the people who work in that business. But I wonder: I see a lot of great C++ developers who don't try to join the game industry, so why do I think I can? Is that reasoning right? I don't live in the USA or a big country like it; I live in a poor country with no game development companies, so I would have to move to the USA to work in this field. So, if I start learning now (C++, game engines, physics engines, 3D math, etc.) while keeping my usual job, would 7-8 months from now be a good time to move and look for a job in the USA as a junior game developer? Is that possible, or is it just a dream? I really need your advice. You can downvote this, no problem; even one piece of advice could help me in my life.

    Read the article

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable, it's an old organically grown program where the only rationale for creating a new DLL is that one previously didn't exist to fill a given need. (and namespaces didn't exist in Delphi so it never crossed our mind to make dll1.main.pas, dll2.main.pas or something even more unique) What we want to do is consolidate all these DLLs into one executable, since none of them are used out of the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. So, I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded in to memory, but say I've got a project with about 4000 files, and 50 dlls, 10 of which are probably utilized by any one user in any one session of the program. The 50 dlls are about 2/3rds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc..). If I loaded all these files in to memory, how much memory is used per unit? how much is used per class? How do I keep the overhead down? and what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help answering, but I think it might clarify what my boss is worried about, we currently start our program at about 18megs, normal working conditions are usually less than 40 megs, he thinks it could climb as high as 120 megs.

    Read the article

  • Steam on 64-bit 14.04: need some help, missing a few 32-bit libs

    - by YellowShark
    Steam says I'm missing the following libs, and I'm hoping someone can help me get things in better shape:

        xyz@abc:~$ STEAM_RUNTIME=0 steam
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is disabled by the user
        Error: You are missing the following 32-bit libraries, and Steam may not run:
        libpangoft2-1.0.so.0
        libpango-1.0.so.0
        libgtk-x11-2.0.so.0
        libgdk_pixbuf-2.0.so.0
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        [2014-06-11 20:45:39] Startup - updater built May 29 2014 09:19:23
        [2014-06-11 20:45:39] Verifying installation...
        [2014-06-11 20:45:39] Verification complete
        [2014-06-11 20:45:42] Shutdown

    I tried installing the following i386 packages: libpango-1.0-0:i386, libpangoft2-1.0-0:i386, and libgdk-pixbuf2.0-0:i386, and symlinking the .so files (from usr/lib/i386whatever../) into the ~/.local/share/Steam/ubuntu12_32/ folder, but I wasn't able to find the right match for the gtk-x11 lib and ultimately wound up in a different, but still non-working, situation. So I've backtracked to this point and have removed those i386 packages for now. It's worth noting that Steam runs if I don't use STEAM_RUNTIME=0. Also, Steam seemed to "recognize" the i386 versions of the libpango & libpangoft2 libs after I symlinked them into place during my troubleshooting; when I reran STEAM_RUNTIME=0 steam, it wouldn't list those two items as missing anymore. Instead, though, I had a bunch of gtk-related issues: something about overlay-scrollbar not being available, as well as warnings that it can't find the murrine engine... a whole bunch of stuff that sounded like I'd gone too far down the wrong path. Anyhow, any help sorting this out would be appreciated, and thanks!
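
    A hedged sketch of the usual fix on 64-bit Ubuntu 14.04: enable the i386 architecture and install the 32-bit packages that provide the four missing libraries, instead of symlinking into Steam's ubuntu12_32 folder (the package names are my best guess for that release; apt-cache search will confirm them):

        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo apt-get install libpango-1.0-0:i386 libpangoft2-1.0-0:i386 \
                             libgtk2.0-0:i386 libgdk-pixbuf2.0-0:i386

    libgtk2.0-0:i386 is the package that supplies libgtk-x11-2.0.so.0; if the theme warnings persist afterwards, 32-bit engine packages such as gtk2-engines-murrine:i386 can be added the same way.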

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Often times as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods -- with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods and what are their strengths and weaknesses? Now, for the purposes of this article when I say notification, I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc. So let's build some context. I'm sitting here thinking about a provider neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class: 1:  2: // using inheritance - omitting argument null checks and halt logic 3: public abstract class MessageListener 4: { 5: private ISubscriber _subscriber; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber) 11: { 12: _subscriber = subscriber; 13: _messageThread = new Thread(MessageLoop); 14: _messageThread.Start(); 15: } 16:  17: // user will override this to process their messages 18: protected abstract void OnMessageReceived(Message msg); 19:  20: // handle the looping in the thread 21: private void MessageLoop() 22: { 23: while(!_isHalted) 24: { 25: // as long as processing, wait 1 second for message 26: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 27: if(msg != null) 28: { 29: OnMessageReceived(msg); 30: } 31: } 32: } 33: ... 34: } It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above. To me, inheritance is a flawed approach for notifications due to several reasons: Inheritance is one of the HIGHEST forms of coupling. You can't seal the listener class because it depends on sub-classing to work. Because C# does not allow multiple-inheritance, I've spent my one inheritance implementing this class. Every time you need to listen to a bus, you have to derive a class which leads to lots of trivial sub-classes. The act of consuming a message should be a separate responsibility than the act of listening for a message (SRP). Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications. Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So lets look at the other tools available to us for getting notified instead. Here's a few other choices to consider. Have the listener expose a MessageReceived event. Have the listener accept a new IMessageHandler interface instance. Have the listener accept an Action<Message> delegate. 
Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events: 1: // using event - ommiting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private bool _isHalted = false; 6: private Thread _messageThread; 7:  8: // assign the subscriber and start the messaging loop 9: public MessageListener(ISubscriber subscriber) 10: { 11: _subscriber = subscriber; 12: _messageThread = new Thread(MessageLoop); 13: _messageThread.Start(); 14: } 15:  16: // user will override this to process their messages 17: public event Action<Message> MessageReceived; 18:  19: // handle the looping in the thread 20: private void MessageLoop() 21: { 22: while(!_isHalted) 23: { 24: // as long as processing, wait 1 second for message 25: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 26: if(msg != null && MessageReceived != null) 27: { 28: MessageReceived(msg); 29: } 30: } 31: } 32: } Note, now we can seal the class to avoid changes and the user just needs to provide a message handling method: 1: theListener.MessageReceived += CustomReceiveMethod; However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional. So how about the delegation via interface? I personally like this method quite a bit. Basically what it does is similar to inheritance method mentioned first, but better because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assuming we had an interface like: 1: public interface IMessageHandler 2: { 3: void OnMessageReceived(Message receivedMessage); 4: } Our listener would look like this: 1: // using delegation via interface - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private IMessageHandler _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler.OnMessageReceived(msg); 28: } 29: } 30: } 31: } And they would call it by creating a class that implements IMessageHandler and pass that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler (as opposed to events) on construction. Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides? 
Furthermore if you have lots of these you end up either with large classes inheriting multiple interfaces to implement one method, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to because the interface can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, and if that interface contains only one method, I start thinking a delegate is a better approach. Now, that said delegates don't accomplish everything an interface can. Obviously having the interface allows you to refer to the classes that implement the interface which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate: 1: // using delegation via delegate - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler(msg); 28: } 29: } 30: } 31: } Here the MessageListener now takes an Action<Message>.  For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is it gives you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing. Incidentally, we could combine both the interface and delegate approach to allow maximum flexibility. Doing this, the user could either pass in a delegate, or specify a delegate interface: 1: // using delegation - give users choice of interface or delegate 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // passes the interface method as a delegate using method group 19: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 20: : this(subscriber, handler.OnMessageReceived) 21: { 22: } 23:  24: // handle the looping in the thread 25: private void MessageLoop() 26: { 27: while(!_isHalted) 28: { 29: // as long as processing, wait 1 second for message 30: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 31: if(msg != null) 32: { 33: _handler(msg); 34: } 35: } 36: } 37: } } This is the method I tend to prefer because it allows the user of the class to choose which method works best for them. 
You may be curious about the actual performance of these different methods. 1: Enter iterations: 2: 1000000 3:  4: Inheritance took 4 ms. 5: Events took 7 ms. 6: Interface delegation took 4 ms. 7: Lambda delegate took 5 ms. Before you get too caught up in the numbers, however, keep in mind that this is performance over over 1,000,000 iterations. Since they are all < 10 ms which boils down to fractions of a micro-second per iteration so really any of them are a fine choice performance wise. As such, I think the choice of what to do really boils down to what you're trying to do. Here's my guidelines: Inheritance should be used only when defining a collection of related types with implementation specific behaviors, it should not be used as a hook for users to add their own functionality. Events should be used when subscription is optional or multi-cast is desired. Interface delegation should be used when you wish to refer to implementing classes by the interface type or if the type requires several methods to be implemented. Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.

    Read the article

  • Website restyle, SEO migration plan?

    - by Goboozo
    I am currently working on a project for one of my biggest clients. We have built a website that will replace the old website. The actual content is largely the same, but the presentation of the content has changed drastically; from our point of view it is much more user-friendly, which was the main reason for updating the site. Since the site's presentation has changed, we have major changes in: HTML & CSS, to change the presentation of the content; URLs, to make them easier to understand (301 redirects have been taken care of and are in place); breadcrumbs, to enhance navigation (the breadcrumbs match the URLs exactly); pagination, added to enable content browsing; and title tags, with descriptive title tags added to the major links and buttons. Basically all user content, including meta tags, has remained the same. Since this company is rather successful and 90% of its clients come from Google's organic results, I am obliged to take all necessary precautions. People tell me I need a migration plan to prevent the site being hurt in Google, but I have never worked with such a plan... So, based on the above: would you consider a migration plan necessary, and what precautions or actions would you recommend to prevent us being pushed down in our SERP positions? Many thanks in advance for your answers.

    Read the article

  • How do I choose the scaling factor of a 3D game world?

    - by concept3d
    I am making a 3D tank game prototype with some physics simulation, using C++. One of the decisions I need to make is the scale of the game world in relation to reality. For example, I could consider 1 in-game unit of measurement to correspond to 1 meter in reality. This feels intuitive, but I feel like I might be missing something. I can think of the following potential problems: 3D modelling program compatibility (?); numerical accuracy (does this matter? especially at large scales: games like Battlefield have huge maps, so how do they avoid losing numerical accuracy with a 1:1 mapping to real-world scale, given that floating-point representations lose precision at larger magnitudes, e.g. in ray casting and physics simulation?); gameplay (I don't want the movement of units to feel too slow or too fast when using near-real-world values like -9.8 m/s^2 for gravity, though this might be subjective, and is it OK to scale imported assets up or down, or is it best to fit the world to their original scale?); and rendering performance (are large meshes with the same vertex count slower to render?). I'm wondering if I should split this into multiple questions...
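
    On the numerical-accuracy point, a rough worked number, assuming 32-bit floats and a 1 unit = 1 m scale:

        relative precision of a float32   ~ 2^-24           ~ 6e-8
        at 10,000 m from the origin       ~ 10,000 * 6e-8   ~ 0.0006 m  (about 0.6 mm)
        at 500,000 m from the origin      ~ 500,000 * 6e-8  ~ 0.03 m    (about 3 cm)

    Millimetre-level granularity is usually fine, but centimetre-level error is enough to make physics contacts and ray casts visibly jitter. That is why engines with very large maps tend to keep 1 unit = 1 m and instead shift the world origin, split the map into local coordinate zones, or render camera-relative, rather than change the unit scale itself.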

    Read the article

  • What's after the iPad? Check out the new Augmented Reality Glasses

    - by Stephen Slade
    Everyone loves their new iPad! The rich features, portability, plethora of apps and ease of use make it the new clipboard on the factory floor or the electronic notebook for meetings. But how many business people walk into an hour-long meeting and really start typing notes on their PC? Yes, some do, but for the general business public there are new technologies coming down the road. The iPad is the latest hula hoop; the next-gen device, I feel, is augmented reality glasses. Your glasses will be your screen and earpiece and will be wirelessly connected, your smart watch will carry the electronics, and maybe your belt can be an extended battery pack. I'm anxious to test one of these. The TELEGRAPH writes: "Android software is believed to power the gadget, enabling similar features to its smartphone and tablets. A 3G /4G data connection, motion sensors and GPS navigation are believed to be included in the device's capabilities. The augmented-reality glasses are the culmination of a two-year initiative called Project Glass, developed in the clandestine Google X lab, ..in Mountain View, ...The New York Times suggested they could cost between $250-$500 " http://www.telegraph.co.uk/technology/news/9187547/Google-unveils-augmented-reality-glasses.html

    Read the article

  • links for 2010-05-11

    - by Bob Rhubart
    Fat Bloke: Oracle VM VirtualBox 3.1.8 released! "Supporting new platforms such as Ubuntu 10.04 (Lucid Lynx) and delivering a host of bugfixes, VirtualBox 3.1.8 is available now from the usual places, " says the Fat Bloke. (tags: oracle otn virtualization linux) Anthony Shorten: What is the Oracle Utilities Application Framework? "The Oracle Utilities Application Framework is a reusable, scalable and flexible java based framework which allows other products to be built, configured and implemented in a standard way," according to Anthony Shorten (tags: oracle otn framework java standards) Audio podcast: Oracle WebLogic Suite Virtualization Option (Application Grid) "Steve Harris, Senior Vice President of application server and Java Platform, Enterprise Edition development, talks about running Oracle WebLogic Server on Oracle JRockit Virtual Edition. Listen here to learn how you can run faster and more efficiently without a guest operating system on Oracle VM." (tags: oracle otn grid wweblogic podcast virtualization) MySQL Community Blog: MySQL track with free event at Kaleidoscope 2010 "The even greater news," writes Giuseppe Maxia, "is that, in addition to the general schedule, there are SUNDOWN SESSIONS!" (tags: java sun oracle mysql) @SOAtoday: Will Cloudsourcing Change the Face of Consulting? "Will we all be working remotely to deliver our client projects going forward? Maybe someday, but not anytime soon." -- Oracle ACE Director Jordan Braunstein (tags: oracle otn oracleace cloudcomputing entarch) @SOAtoday: Are we Paid to Say No? "Software architects take their governance initiatives seriously, and I can say with a high level of confidence that most of these denials are highly justified. But, have we architects lost our entrepreneurial spirit, with governance as our defense? Are we over-scrutinizing new ideas and slowing down pilots of innovation because they don’t align with our governance policies and enterprise frameworks?" -- Oracle ACE Director Jordan Braunstein (tags: architect entarch oracle otn soa)

    Read the article

  • SAP Applications Certified for Oracle SPARC SuperCluster

    - by Javier Puerta
    SAP applications are now certified for use with the Oracle SPARC SuperCluster T4-4, a general-purpose engineered system designed for maximum simplicity, efficiency, reliability, and performance. "The Oracle SPARC SuperCluster is an ideal platform for consolidating SAP applications and infrastructure," says Ganesh Ramamurthy, vice president of engineering, Oracle. "Because the SPARC SuperCluster is a pre-integrated engineered system, it enables data center managers to dramatically reduce their time to production for SAP applications to a fraction of what a build-it-yourself approach requires and radically cuts operating and maintenance costs." SAP infrastructure and applications based on the SAP NetWeaver technology platform 6.4 and above and certified with Oracle Database 11g Release 2, such as the SAP ERP application and SAP NetWeaver Business Warehouse, can now be deployed using the SPARC SuperCluster T4 4. The SPARC SuperCluster T4-4 provides an optimized platform for SAP environments that can reduce configuration times by up to 75 percent, reduce operating costs up to 50 percent, can improve query performance by up to 10x, and can improve daily data loading up to 4x. The Oracle SPARC SuperCluster T4-4 is the world's fastest general purpose engineered system, delivering high performance, availability, scalability, and security to support and consolidate multi-tier enterprise applications with Web, database, and application components. The SPARC SuperCluster T4-4 combines Oracle's SPARC T4-4 servers running Oracle Solaris 11 with the database optimization of Oracle Exadata, the accelerated processing of Oracle Exalogic Elastic Cloud software, and the high throughput and availability of Oracle's Sun ZFS Storage Appliance all on a high-speed InfiniBand backplane. Part of Oracle's engineered systems family, the SPARC SuperCluster T4-4 demonstrates Oracle's unique ability to innovate and optimize at every layer of technology to simplify data center operations, drive down costs, and accelerate business innovation. For more details, refer to Our press release Datasheet: Oracle's SPARC SuperCluster T4-4 (PDF) Datasheet: Oracle's SPARC SuperCluster Now Supported by SAP (PDF) Video Podcast: Oracle's SPARC SuperCluster (MP4)

    Read the article

  • Microsoft researchers show off the best touch screen ever made. Better than Apple touch screens!

    - by Gopinath
    All the touch devices on the market today (iPads, iPhones, Samsung tablets and phones, etc.) have a very small issue: 100 milliseconds of lag. The lag is the amount of time a touch device takes to respond after you touch it. The 100 milliseconds may not be an issue when you are tapping and swiping interface elements, but it becomes apparent when you swing your finger around the screen faster. For example, in any painting app the lag is very obvious, and the screen responds more slowly than an artist can paint with a finger. Researchers at Microsoft's labs have come out with a prototype touch device that cuts the 100 milliseconds of lag down to just 1 millisecond. That’s 100 times faster than today’s touch screen devices. Check out the video embedded below for a demo of the new touch screen. Over at TechCrunch, Chris Velazco says: The difference is staggering, especially when Dietz trots out the slow-motion footage. With the delay between touch input and screen response slashed by orders of magnitude, a device that sports the sort of super-low-latency Dietz envisions has the potential to feel far more (for lack of a better term) natural than its brethren. There’s zero delay when you slide a checker across a board, for example, and bringing that sort of instantaneous feedback to the many screens in our lives could help to bridge the gap between operating a bit of software and the feeling of interacting with objects. It will be a great boost to Microsoft’s tablet strategy if they succeed in bringing this research to mass market and allow its partners to use the technology on Windows 8 tablets.

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    +  = Simple and safe Data connections.   This guide is for someone wanting to use the latest ODP.NET quickly without reading the official documentation. This guide will get you up and running in about 15 minutes. I have reviewed my referral link to my simple Setting up ODP.net with Win7 x64 and noticed most people were searching for one of the following terms: “how to use odp.net with vs” “setup connection odp.net” “query db using odp and vs” While my article provided links and a sample tnsnames.ora file, it really didn’t tell you how to use it. I’m hoping that this brief tutorial will help. So before we get started, you will need the following: Download the following: www.oracle.com/technology/software/tech/dotnet/utilsoft.html from oracle and install it. It is the first one on the page. Visual Studio 2008 or 2010. It should be noted that The System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore as it has been depreciated. The latest version which is what we are using is Oracle.DataAccess.Client. First things first, Add a reference to the Oracle.DataAccess.Client after you install ODP.NET   Copy and paste the following C# code into your project and replace the relevant info including the query string and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.   Lambda Expression. using System; using System.Data; using Oracle.DataAccess.Client;   namespace ConsoleApplication1 {     class Program     {         static void Main(string[] args)         {           try         {             //Setup DataSource             string oradb = "Data Source=(DESCRIPTION ="                                    + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"                                    + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME))) ;"                                    + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";                        //Open Connection to Oracle - this could be moved outside the try.             OracleConnection conn = new OracleConnection(oradb);             conn.Open();               //Create cmd and use parameters to prevent SQL injection attacks.             OracleCommand cmd = new OracleCommand();             cmd.Connection = conn;               cmd.CommandText = "select username from table where username = :username";               OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);             p1.Value = username;             cmd.Parameters.Add(p1);               cmd.CommandType = CommandType.Text;               OracleDataReader dr = cmd.ExecuteReader();             dr.Read();               //Contains the value of the datarow             Console.WriteLine(dr["username"].ToString());               //Disposes of objects.             
dr.Dispose();             cmd.Dispose();             conn.Dispose();         }           catch (OracleException ex) // Catches only Oracle errors         {             switch (ex.Number)             {                 case 1:                     Console.WriteLine("Error attempting to insert duplicate data.");                     break;                 case 12545:                     Console.WriteLine("The database is unavailable.");                     break;                 default:                     Console.WriteLine(ex.Message.ToString());                     break;             }         }           catch (Exception ex) // Catches any error not previously caught         {                   Console.WriteLine("Unidentified Error: " + ex.Message.ToString());              }         }       }           } At this point, you should have a working Program that returns data from an oracle database. If you are still having trouble then drop me a line and I will be happy to assist. As of this writing, oracle has announced the latest beta release of ODP.NET 11.2.0.1.1 Beta.  This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as its BETA, and I wouldn’t want any production code using any BETA software.

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question on how to skip or bypass a trailer record or a badly formatted/empty row in a SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I indeed found that it seems even thought there is a number of posts and articles on the topic none of them are employing the simplest and the most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers. OK, enough talk. Let’s face the problem: Typically a flat file (e.g. a comma delimited/CSV) needs to be processed (loaded into a database in most cases really). Oftentimes, such an input file is produced by some sort of an out of control, 3-rd party solution and would come in with some garbage characters and/or even malformed/miss-formatted rows. One such example could be this imaginary file: As you can see several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let’s outline our course of action to start off: Will use SSIS 2005 to create a DFT; The DFT will use a Flat File Source to our input [bad] flat file; We will use a Conditional Split to process the bad input file; and finally Dump the resulting data to a new [clean] file. Well, only four steps, let’s see if it is too much of work. 1: Start the BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT): 2, and 3: I had added the data viewer to just see what I am getting, alas, surprisingly the data issues were not seen it:   What really is the key in the approach it is to properly set the Conditional Split Transformation. Visually it is: and specifically its SSIS Expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name “No Empty Rows”, but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other one you need. Click on the Conditional Split and choose the green arrow to drop onto the target. When you do so make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package must look like: As the last step will run our package to examine the produced output file. F5: and… it looks great!

    Read the article

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines which I would appreciate if you could help me sort out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using, or if it's a general misunderstanding. I happen to use Windows Workflow Foundation (WWF). TLDR-version WWF allows you to implement business processes in long-running workflows (think months or even years). When started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already started 'instances' of the business process? What am I missing? Background In WWF you define a workflow by combining a set of activites. There are different types of activities - some of them are for flow control, such as the IfElseActivity and the WhileActivty while others allows you to perform actual tasks, such as the CodeActivity wich allows you to run .NET code and the InvokeWebServiceActivity which allows you to call web services. The activites are combined to a workflow using a visual designer. You pretty much drag-and-drop activities from a toolbox to a designer area and connect the activites to each other. The workflow and activities have input paramters, output parameters and variables. We have a single workflow which sometimes runs in a matter of a few days, but it may run for 5-6 months. WWF takes care of persisting the workflow state (what activity are we currently executing, what are the variable values and so on). So far I think WWF makes sense. Some people will prefer to implement a software representation of a business process using a visual designer over writing all of it in code. So what's the issue then? What I don't really get is the following: WWF is designed to take care of long-running workflows. But at the same time, WWF has no built-in functionality which allows you to modify the running workflows. So if you model a business process using a workflow and run that for 6 months, you better hope that the business process does not change. Because if it do, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something. For us, this has had some real-world effects: We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallell, in other words several versions of the business logics. This is the same as having many differnt versions of your code running in production in the same system at the same time, which becomes a bit hard to understand for users. (depending on on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently) Our workflow refers to different types of entities and since WWF now has persisted and serialized these we can't really refactor the entities since then existing workflows can't be resumed (deserialization will fail We've received some suggestions on how to handle this When we create a new version of the workflow, cancel all running workflows and create new ones. But in our workflows there's a lot of manual work involved and if we start from scratch a lot of people has to re-do their work. Track what has been done in the workflow and when you create a new one skip activites which have already been executed. 
I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out what activities to skip if there's major refactoring done to a workflow. When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes and not larger changes. What am I missing?

    Read the article

  • Migrating a blog from Orchard 0.5 to 0.9

    - by Bertrand Le Roy
    My personal blog still runs on Orchard 0.5, because the theme that I used to build it is not yet available for more recent versions, but it is still very important for me to know that I can migrate all my content and comments to a new version at any time. Fortunately, Nick Mayne has been consistently shipping a BlogML module a few days after each of the Orchard versions shipped. Because the module gallery for each version is behind a different URL and is kept alive even after a new one shipped, it is very easy to install the module for both versions. Step 0: Setting up the migration environment In order to do the migration, I made a local copy of the production site on my laptop (data included: I'm using SQL CE) and I also created a new local site with a fresh install of Orchard 0.9. Step 1: Enable the gallery feature on both versions From the admin UI, go to Features and locate the Gallery feature under "Packaging". Enable it. You may now click on "Browse Gallery" on the 0.5 instance and "Modules" under "Gallery" for 0.9: Step 2: Install the BlogML module on both versions From the gallery page, locate the BlogML module and install it. Do it on both versions. Then go to Features and enable BlogML under "Content Publishing". Do it on both versions. Step 3: Export from the 0.5 version Click on "Manage Blog" then on "Export using BlogML" from the 0.5 version. The module then informs you of the path of the saved file: Step 4: Import into the 0.9 version From the 0.9 version, click "Import under "Blogs". Click the button to browse to the file that you just saved from 0.5. Then click "Upload file and Import" Step 5: Copy the 0.5 media folder into 0.9 Copy the contents of the 0.5 version's media folder into the media folder of the 0.9 version. Once that is done, you can delete the "Default/Blog Exports" subfolder. Step 6: Configure the target blog Click "Manage Blog", then "Blog Properties" and restore any properties you had on the source blog. For me, it was the title and URL as well as to set the blog as the home page and show it on the main menu: Step 7: Republish the new site to the production server Once this is done and everything works locally, you are ready to publish to the production site. I use FTP. Note: this should work just as well for any couple of versions for which the BlogML module exists, and not just for 0.5 and 0.9.

    Read the article
