Search Results

Search found 45316 results on 1813 pages for 'class literals'.


  • Approaches to create a nested tree structure of NSDictionaries?

    - by d11wtq
    I'm parsing some input which produces a tree structure containing NSDictionary instances on the branches and NSString instances at the leaves. After parsing, the whole structure should be immutable. I feel like I'm jumping through hoops to create the structure and then make sure it's immutable when it's returned from my method. We can probably all relate to the input I'm parsing, since it's a query string from a URL. Given a string like this:

        a=foo&b=bar&a=zip

    We expect a structure like this:

        NSDictionary {
            "a" => NSDictionary { 0 => "foo", 1 => "zip" },
            "b" => "bar"
        }

    I'm keeping it just two-dimensional in this example for brevity, though in the real world we sometimes see var[key1][key2]=value&var[key1][key3]=value2 type structures. The code hasn't evolved that far just yet. Currently I do this:

        - (NSDictionary *)parseQuery:(NSString *)queryString
        {
            NSMutableDictionary *params = [NSMutableDictionary dictionary];
            NSArray *pairs = [queryString componentsSeparatedByString:@"&"];
            for (NSString *pair in pairs) {
                NSRange eqRange = [pair rangeOfString:@"="];
                NSString *key;
                id value;

                // If the parameter is a key without a specified value
                if (eqRange.location == NSNotFound) {
                    key = [pair stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    value = @"";
                } else {
                    // Else determine both key and value
                    key = [[pair substringToIndex:eqRange.location] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    if ([pair length] > eqRange.location + 1) {
                        value = [[pair substringFromIndex:eqRange.location + 1] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                    } else {
                        value = @"";
                    }
                }

                // Parameter already exists, so it must become a dictionary
                if (nil != [params objectForKey:key]) {
                    id existingValue = [params objectForKey:key];
                    if (![existingValue isKindOfClass:[NSDictionary class]]) {
                        value = [NSDictionary dictionaryWithObjectsAndKeys:
                                 existingValue, [NSNumber numberWithInt:0],
                                 value, [NSNumber numberWithInt:1], nil];
                    } else {
                        // FIXME: There must be a more elegant way to build a nested
                        // dictionary where the end result is immutable?
                        NSMutableDictionary *newValue = [NSMutableDictionary dictionaryWithDictionary:existingValue];
                        [newValue setObject:value forKey:[NSNumber numberWithInt:[newValue count]]];
                        value = [NSDictionary dictionaryWithDictionary:newValue];
                    }
                }

                [params setObject:value forKey:key];
            }
            return [NSDictionary dictionaryWithDictionary:params];
        }

    If you look at the bit where I've added FIXME, it feels awfully clumsy: pulling out the existing dictionary, creating an immutable version of it, adding the new value, then creating an immutable dictionary from that to set back in place. Expensive and unnecessary? I'm not sure if there are any Cocoa-specific design patterns I can follow here.
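    One small simplification, offered as a hedged sketch rather than an answer from the original post: -copy on a mutable Cocoa collection already returns an immutable snapshot, so the dictionaryWithDictionary: round-trips can shrink. This assumes the same variables as the FIXME branch above, under manual reference counting:

        // -mutableCopy gives a mutable working copy; -copy gives back an
        // immutable NSDictionary, so no dictionaryWithDictionary: is needed.
        NSMutableDictionary *nested = [existingValue mutableCopy];
        [nested setObject:value forKey:[NSNumber numberWithInt:[nested count]]];
        value = [[nested copy] autorelease];
        [nested release];

    By the same token, the method's final line could become return [[params copy] autorelease];.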


  • Can Microsoft Build Appliances?

    - by andrewbrust
    Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently (and I am paraphrasing) that the computing world, especially on the consumer side, has shifted from building hardware and software that make things possible to do, to building products and technologies that make things easy to do. Billy crystallized things perfectly, as he often does.

    In this new world of "easy to do," Apple has done very well and Microsoft has struggled. In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible. In the new world, people want elegant cutlery. They may want cake cutters and utility knives too, but they don't want one device that works for all three tasks. People don't want tools, they want utensils. People don't want machines. They want appliances.

    Microsoft Appliances: They Do Exist

    Microsoft has built a few appliance-like devices. I would say Xbox 360 is an appliance. It's versatile, mind you, but it's the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals. Windows Phone 7 is an appliance too. It's a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack. Zune is an appliance too, and a nice one. It hasn't attained much traction in the market, but that's probably because the seminal consumer computing appliance -- the iPod -- got there so much more quickly.

    In the embedded world, Mediaroom, Microsoft's set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance. So is Microsoft's Sync technology, used in Ford automobiles. Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs' server, networking and storage hardware. You buy the appliance units from the OEMs, plug them in, connect them and go.

    I would even say that Bing is an appliance. Not in the hardware sense, mind you, but from the software perspective it's a single-purpose product that you visit or run, use and then move on. You don't have to install it (except for the iOS and Android native apps, where it's pretty straightforward), you don't have to customize it, you don't have to program it. Basically, you just use it.

    Microsoft Appliances that Should Exist

    But Microsoft builds a bunch of things that are not appliances. Media Center is not an appliance, and it most certainly should be. Instead, it's an app that runs on Windows 7. It runs full-screen, and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

    The next version of Windows Home Server won't, in my opinion, be an appliance either. Now that the Drive Extender technology is gone, and users can't just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance. Much has been written about this decision by Microsoft. I'll just sum it up in one word: pity.

    Microsoft doesn't have anything remotely appliance-like in the tablet category, either. Until it does, it likely won't have much market share in that space either. And of course, the bulk of Microsoft's product catalog on the business side is geared to enterprise machines and not personal appliances.

    Appliance DNA: They Gotta Have It

    The consumerization of IT is real, because businesspeople are consumers too. They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too. Secure and reliable push email on a smartphone is necessary, but it isn't enough. People want great apps and a pleasurable user experience too. The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn't enough either. People want a flawless touch experience available for the times they want to read and take quick notes. Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market. Think about how slow the Office upgrade cycle is... now imagine if the next version of Office had a first-class alternate touch UI, and consider the possible acceleration in adoption rates.

    Can Microsoft make the appliance switch? Can the appliance mentality become pervasive at the company? Can Microsoft hasten its release cycles dramatically and shed the "some assembly required" paradigm upon which many of its products are based? Let's face it, the chances that Microsoft won't make this transition are significant.

    But there are also encouraging signs, and they should not be ignored. The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard. The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits smack in the middle of Microsoft's enterprise stack. Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities. As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

    That Great Big Appliance in the Sky

    Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft's cloud offerings -- Azure and BPOS/Office 365. While the cloud does not represent a physical appliance (quite the opposite, in fact), its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit. Azure is primarily a platform-as-a-service offering; it doesn't just provide infrastructure. SQL Azure does likewise for databases, and Office 365 does likewise for SharePoint, Exchange and Lync. You don't administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently. Fault tolerance and content distribution are just there. No muss. No fuss. You use these services; you don't have to set them up and think about them. That's how appliances work.

    To me, these signs point out that Microsoft has the full capability of transforming itself. But there's a lot of work ahead. Microsoft may say it's "all in" on the cloud, but the majority of the company is still oriented around its old products and models. There needs to be a wholesale cultural transformation in Redmond. It can happen, but product management, program management, the field and executive ranks must unify in the effort. So must partners, and even customers. New leaders must rise up, and Microsoft must be able to see itself as a winner.

    If Microsoft does this, it could lock in decades of new success, and be a standard business school case study for doing so. If not, the company will have missed an opportunity, and may see its undoing.


  • Trying to send email in Java using gmail always results in username and password not accepted.

    - by Thaeos
    When I call the send method (after setting studentAddress), I get this:

        javax.mail.AuthenticationFailedException: 535-5.7.1 Username and Password not accepted.
        Learn more at 535 5.7.1 http://mail.google.com/support/bin/answer.py?answer=14257 y15sm906936wfd.10

    I'm pretty sure the code is correct, and 100% positive that the username and password details I'm entering are correct. So is this something wrong with Gmail, or what? This is my code:

        import java.util.*;
        import javax.mail.*;
        import javax.mail.internet.*;

        public class SendEmail {
            private String host = "smtp.gmail.com";
            private String emailLogin = "[email protected]";
            private String pass = "xxx";
            private String studentAddress;
            private String to;
            private Properties props = System.getProperties();

            public SendEmail() {
                props.put("mail.smtps.auth", "true");
                props.put("mail.smtps.starttls.enable", "true");
                props.put("mail.smtp.host", host);
                props.put("mail.smtp.user", emailLogin);
                props.put("mail.smtp.password", pass);
                props.put("mail.smtp.port", "587");
                to = "[email protected]";
            }

            public void setStudentAddress(String newAddress) {
                studentAddress = newAddress;
            }

            public void send() {
                Session session = Session.getDefaultInstance(props, null);
                MimeMessage message = new MimeMessage(session);
                try {
                    message.setFrom(new InternetAddress(emailLogin));
                    InternetAddress[] studentAddressList = {new InternetAddress(studentAddress)};
                    message.setReplyTo(studentAddressList);
                    message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
                    message.setSubject("Test Email");
                    message.setText("This is a test email!");

                    Transport transport = session.getTransport("smtps");
                    transport.connect(host, emailLogin, pass);
                    transport.sendMessage(message, message.getAllRecipients());
                    transport.close();
                } catch (MessagingException me) {
                    System.out.println("There has been an email error!");
                    me.printStackTrace();
                }
            }
        }

    Any ideas?
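    A hedged sketch of one commonly suggested cleanup (not part of the original question, and not a guaranteed fix for the 535 error): JavaMail never reads a "mail.smtp.password" property; credentials go through connect(), as above, or through an Authenticator, as below. Note also that Session.getDefaultInstance() caches the first Session created in the JVM, so getInstance() is safer when properties vary:

        // Supply the credentials through an Authenticator and a fresh Session.
        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(emailLogin, pass);
            }
        });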


  • jquery data selector

    - by Tauren
    I need to select elements based on values stored in an element's .data() object. At a minimum, I'd like to select top-level data properties using selectors, perhaps like this:

        $('a').data("category", "music");
        $('a:data(category=music)');

    Or perhaps the selector would be in regular attribute selector format:

        $('a[category=music]');

    Or in attribute format, but with a specifier to indicate it is in .data():

        $('a[:category=music]');

    I've found James Padolsey's implementation, which looks simple yet good. The selector formats above mirror methods shown on that page. There is also this Sizzle patch. For some reason, I recall reading a while back that jQuery 1.4 would include support for selectors on values in the jQuery .data() object. However, now that I'm looking for it, I can't find it. Maybe it was just a feature request that I saw. Is there support for this and I'm just not seeing it?

    Ideally, I'd like to support sub-properties in data() using dot notation, like this:

        $('a').data("user", {name: {first: "Tom", last: "Smith"}, username: "tomsmith"});
        $('a[:user.name.first=Tom]');

    I also would like to support multiple data selectors, where only elements with ALL specified data selectors are found. The regular jQuery multiple selector does an OR operation. For instance, $('a.big, a.small') selects a tags with either class big or small. I'm looking for an AND, perhaps like this:

        $('a').data("artist", {id: 3281, name: "Madonna"});
        $('a').data("category", "music");
        $('a[:category=music && :artist.name=Madonna]');

    Lastly, it would be great if comparison operators and regex features were available on data selectors, so $('a[:artist.id>5000]') would be possible. I realize I could probably do much of this using filter(), but it would be nice to have a simple selector format.

    What solutions are available to do this? Is James Padolsey's the best solution at this time? My concern is primarily performance, but also the extra features like sub-property dot notation and multiple data selectors. Are there other implementations that support these things or are better in some way?
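    For context, a minimal sketch of the custom-pseudo approach such implementations take (written against jQuery 1.x's Sizzle; the :data name and the dot-notation walker here are illustrative, not a built-in jQuery API):

        jQuery.expr[':'].data = function (el, i, match) {
            // match[3] is the text inside the parentheses, e.g. "user.name.first=Tom"
            var parts = match[3].split('='),
                keys = parts[0].split('.'),
                val = jQuery(el).data(keys.shift());
            // walk dot-notation sub-properties
            while (val != null && keys.length) {
                val = val[keys.shift()];
            }
            return parts.length > 1 ? String(val) === parts[1] : val !== undefined;
        };

        // usage: $('a:data(category=music)') or $('a:data(user.name.first=Tom)')

    Note that AND semantics come for free by chaining pseudos, e.g. $('a:data(category=music):data(artist.name=Madonna)').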


  • Complex error handling

    - by Caspin
    I've got a particularly ornery piece of network code. I'm using asio, but that really doesn't matter for this question. I assume there is no way to unbind a socket other than closing it. The problem is that open(), bind(), and listen() can all throw a system_error. So I handled the code with a simple try/catch. The code as written is broken.

        using namespace boost::asio;

        class Thing
        {
        public:
            ip::tcp::endpoint m_address;
            ip::tcp::acceptor m_acceptor;

            /// connect should handle all of its exceptions internally.
            bool connect()
            {
                try
                {
                    m_acceptor.open( m_address.protocol() );
                    m_acceptor.set_option( ip::tcp::acceptor::reuse_address(true) );
                    m_acceptor.bind( m_address );
                    m_acceptor.listen();
                    m_acceptor.async_accept( /*stuff*/ );
                }
                catch( const boost::system::system_error& error )
                {
                    assert( m_acceptor.is_open() );
                    m_acceptor.close();
                    return false;
                }
                return true;
            }

            /// don't call disconnect unless connect previously succeeded.
            void disconnect()
            {
                // other stuff needed to disconnect is omitted
                m_acceptor.close();
            }
        };

    The error is that if the acceptor fails to open, the catch block will try to close an acceptor that has never been opened and throw another system_error. One solution is to add an if( m_acceptor.is_open() ) in the catch block, but that tastes wrong, kind of like mixing C-style error checking with C++ exceptions. If I were to go that route, I may as well use the non-throwing version of open():

        boost::system::error_code error;
        acceptor.open( address.protocol(), error );
        if( !error )
        {
            try
            {
                acceptor.set_option( ip::tcp::acceptor::reuse_address(true) );
                acceptor.bind( address );
                acceptor.listen();
                acceptor.async_accept( /*stuff*/ );
            }
            catch( const boost::system::system_error& error )
            {
                assert( acceptor.is_open() );
                acceptor.close();
                return false;
            }
        }
        return !error;

    Is there an elegant way to handle these possible exceptions using RAII and try/catch blocks? Am I just wrong-headed in trying to avoid if( error condition ) style error handling when using exceptions?
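    For what it's worth, here is a minimal sketch of a third option (same members as above; not the author's code): use the non-throwing overloads for every call, so no catch block has to guess at the acceptor's state:

        bool connect()
        {
            boost::system::error_code ec;
            m_acceptor.open( m_address.protocol(), ec );
            if( !ec ) m_acceptor.set_option( ip::tcp::acceptor::reuse_address(true), ec );
            if( !ec ) m_acceptor.bind( m_address, ec );
            if( !ec ) m_acceptor.listen( socket_base::max_connections, ec );
            if( !ec ) m_acceptor.async_accept( /*stuff*/ );  // async errors arrive via the handler
            if( ec && m_acceptor.is_open() )
            {
                boost::system::error_code ignored;
                m_acceptor.close( ignored );  // non-throwing close
            }
            return !ec;
        }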


  • Merging two XML files into one XML file using Java

    - by dmurali
    I am stuck on how to proceed with combining two different XML files (which have the same structure). When I was doing some research on it, people said that XML parsers like DOM or StAX would have to be used. But can't I do it with the regular IO streams? I am currently trying to do it with IO streams, but this is not solving my purpose; it's getting more complex. For example, what I have tried is:

        public class GUI {
            public static void main(String[] args) throws Exception {
                // Creates file to write to
                Writer output = new BufferedWriter(new FileWriter("C:\\merged.xml"));
                String newline = System.getProperty("line.separator");
                output.write("");

                // Read in xml file 1
                FileInputStream in1 = new FileInputStream("C:\\1.xml");
                BufferedReader br1 = new BufferedReader(new InputStreamReader(in1));
                String strLine;
                while ((strLine = br1.readLine()) != null) {
                    if (strLine.contains("<MemoryDump>")) {
                        strLine = strLine.replace("<MemoryDump>", "xmlns:xsi");
                    }
                    if (strLine.contains("</MemoryDump>")) {
                        strLine = strLine.replace("</MemoryDump>", "xmlns:xsd");
                    }
                    output.write(newline);
                    output.write(strLine);
                    System.out.println(strLine);
                }

                // Read in xml file 2
                FileInputStream in2 = new FileInputStream("C:\\2.xml");
                BufferedReader br2 = new BufferedReader(new InputStreamReader(in2));
                String strLine2;
                while ((strLine2 = br2.readLine()) != null) {
                    if (strLine2.contains("<MemoryDump>")) {
                        strLine2 = strLine2.replace("<MemoryDump>", "");
                    }
                    if (strLine2.contains("</MemoryDump>")) {
                        strLine2 = strLine2.replace("</MemoryDump>", "");
                    }
                    output.write(newline);
                    output.write(strLine2);
                    System.out.println(strLine2);
                }
            }
        }

    I request you to kindly let me know how to proceed with merging two XML files, adding additional content as well. It would be great if you could provide some example links too. Thank you in advance!
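    A hedged sketch of the parser-based route the question alludes to (the file names and the <MemoryDump> root element are taken from the question; the merging logic is standard DOM, offered as an illustration, not as the asker's code):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class XmlMerge {
            public static void main(String[] args) throws Exception {
                DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                Document doc1 = db.parse(new File("C:\\1.xml"));
                Document doc2 = db.parse(new File("C:\\2.xml"));

                // Copy every child of doc2's root under doc1's root (<MemoryDump>).
                Element root1 = doc1.getDocumentElement();
                NodeList kids = doc2.getDocumentElement().getChildNodes();
                for (int i = 0; i < kids.getLength(); i++) {
                    // importNode clones the node into doc1's document context
                    root1.appendChild(doc1.importNode(kids.item(i), true));
                }

                // Serialize the merged document back out.
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.transform(new DOMSource(doc1), new StreamResult(new File("C:\\merged.xml")));
            }
        }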


  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those already out there. Assuming you know you should be testing, then comes the problem of how to actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts.

    Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and the level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting.

    Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system.

    This workflow can easily be extended to Enhancement Requests as well as Bug Reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined".

    So write a new unit test to define that behaviour. Add assertions to the tests to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite, as in the sketch below.
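    A hedged illustration of that last point (the Calculator API and the NUnit attribute are stand-ins, not from the post):

        [Test]
        public void Example_AddTwoNumbers()
        {
            // This test doubles as the code sample printed in the API manual.
            var calc = new Calculator();
            Assert.AreEqual(5, calc.Add(2, 3));
        }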
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed, and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken.

    The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item.

    The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests.

    When drilling through the classes, there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested which easily could be. The other reaction type is "OMG". This is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code.

    Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while refactoring the tests.

    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit this into the standard red-green-refactor cycle. The refactor step not only applies to the production code but also to the tests, though not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.


  • WebBrowser Control in ATL window. How to free up memory on window unload? I'm stuck.

    - by Martin
    Hello there. I have a Win32 C++ application. There is the _tWinMain(...) method, with GetMessage(...) in a while loop at the end. Before GetMessage(...) I create the main window with:

        HWND m_MainHwnd = CreateWindowExW(WS_EX_TOOLWINDOW | WS_EX_LAYERED,
                                          CAxWindow::GetWndClassName(),
                                          _TEXT("http://www.-website-.com"),
                                          WS_POPUP, 0, 0, 1024, 768,
                                          NULL, NULL, m_Instance, NULL);
        ShowWindow(m_MainHwnd);

    If I do not create the main window, my application needs about 150K of memory. But with the main window created, with the WebBrowser control inside, the memory usage increases to 8500K. But I want to dynamically unload the main window. My _tWinMain(...) keeps running! I'm unloading with:

        DestroyWindow(m_MainHwnd);

    But the WebBrowser control won't unload and free up the memory it used! Application memory used is still 8500K. I can also get the WebBrowser instance (or, with some additional code, the WebBrowser HWND):

        IWebBrowser2* m_pWebBrowser2;
        CAxWindow wnd = (CAxWindow)m_MainHwnd;
        HRESULT hRet = wnd.QueryControl(IID_IWebBrowser2, (void**)&m_pWebBrowser2);

    So I tried to free up the memory used by the main window and WebBrowser control with (let's say it's experimental):

        if (m_pWebBrowser2) m_pWebBrowser2->Release();
        DestroyWindow(m_hwndWebBrowser); // <-- just analogous
        OleUninitialize();

    No success at all. I also created a wrapper class which creates the main window. I created a pointer and freed it up with delete:

        Wrapper* wrapper = new Wrapper(); // wrapper creates the main window inside and shows it
        // ...do some stuff
        delete wrapper;

    No success. Still 8500K. So please, how can I get rid of the main window and its WebBrowser control and free up the memory, returning to about 150K? Later I will recreate the window. It's a dynamic load and unload of the main window, depending on other commands. Thanks! Regards, Martin


  • DoubleAnimation in ScaleTransform

    - by Adam S
    I'm trying, as an exercise, to use a DoubleAnimation on the ScaleX and ScaleY properties of a ScaleTransform. I have a square rectangle (144x144) which I want to stretch into an elongated rectangle over five seconds. My XAML:

        <Window x:Class="ScaleTransformTest.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <Grid>
                <Rectangle Name="rect1" Width="144" Height="144" Fill="Aqua">
                    <Rectangle.RenderTransform>
                        <ScaleTransform ScaleX="1" ScaleY="1" />
                    </Rectangle.RenderTransform>
                </Rectangle>
            </Grid>
        </Window>

    My C#:

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            ScaleTransform scaly = new ScaleTransform(1, 1);
            rect1.RenderTransform = scaly;
            Duration mytime = new Duration(TimeSpan.FromSeconds(5));
            Storyboard sb = new Storyboard();
            DoubleAnimation danim1 = new DoubleAnimation(1, 1.5, mytime);
            DoubleAnimation danim2 = new DoubleAnimation(1, 0.5, mytime);
            sb.Children.Add(danim1);
            sb.Children.Add(danim2);
            Storyboard.SetTarget(danim1, scaly);
            Storyboard.SetTargetProperty(danim1, new PropertyPath(ScaleTransform.ScaleXProperty));
            Storyboard.SetTarget(danim2, scaly);
            Storyboard.SetTargetProperty(danim2, new PropertyPath(ScaleTransform.ScaleYProperty));
            sb.Begin();
        }

    Unfortunately, when I run this program, it does nothing. The rectangle stays at 144x144. If I do away with the animation and just write:

        ScaleTransform scaly = new ScaleTransform(1.5, 0.5);
        rect1.RenderTransform = scaly;

    it will elongate it instantly, no problem. The problem is elsewhere. Any suggestions? I have read the discussion at http://www.eggheadcafe.com/software/aspnet/29220878/how-to-animate-tofrom-an.aspx in which someone seems to have gotten a pure-XAML version working, but the code is not shown there.

    EDIT: At http://stackoverflow.com/questions/2131797/applying-animated-scaletransform-in-code-problem it seems someone had a very similar problem. I am fine with using his method that worked, but what the heck is that string thePath = "(0).(1)[0].(2)"; all about? What are those numbers representing?
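    For reference, "(0).(1)[0].(2)" is WPF's indexed PropertyPath syntax: each parenthesized number is an index into the dependency properties passed as PropertyPath parameters, so new PropertyPath("(0).(1)[0].(2)", UIElement.RenderTransformProperty, TransformGroup.ChildrenProperty, ScaleTransform.ScaleXProperty) reads as "RenderTransform, then child 0 of the TransformGroup, then ScaleX". And here is a hedged sketch of a simpler workaround (not from the thread): animatable objects such as ScaleTransform can be driven directly, with no Storyboard targeting at all:

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            ScaleTransform scaly = new ScaleTransform(1, 1);
            rect1.RenderTransform = scaly;
            Duration mytime = new Duration(TimeSpan.FromSeconds(5));
            // BeginAnimation animates the transform's properties directly.
            scaly.BeginAnimation(ScaleTransform.ScaleXProperty, new DoubleAnimation(1, 1.5, mytime));
            scaly.BeginAnimation(ScaleTransform.ScaleYProperty, new DoubleAnimation(1, 0.5, mytime));
        }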


  • Adding proper THEAD sections to a GridView

    - by Rick Strahl
    I'm working on some legacy code for a customer today and dealing with a page that has my favorite 'friend' on it: a GridView control. The ASP.NET GridView control (and also the older DataGrid control) creates some pretty messed-up HTML. One of the more annoying things it does is to generate all rows, including the header, into the <tbody> section of the document rather than into a properly separated <thead> section. Here's typical GridView-generated HTML output:

        <table class="tablesorter blackborder" cellspacing="0" rules="all" border="1"
               id="Table1" style="border-collapse:collapse;">
            <tr>
                <th scope="col">Name</th>
                <th scope="col">Company</th>
                <th scope="col">Entered</th>
                <th scope="col">Balance</th>
            </tr>
            <tr>
                <td>Frank Hobson</td><td>Hobson Inc.</td>
                <td>10/20/2010 12:00:00 AM</td><td>240.00</td>
            </tr>
            ...
        </table>

    Notice that all content, both the headers and the body of the table, is generated directly under the <table> tag, and there's no explicit use of <tbody>, <thead> (or <tfoot>, for that matter). When the browser renders this document, some default settings kick in and the DOM tree turns into something like this:

        <table>
            <tbody>
                <tr>  <-- header
                <tr>  <-- detail row
                <tr>  <-- detail row
            </tbody>
        </table>

    Now if you're just rendering the grid server-side, and you're applying all your styles through CssClass assignments, this isn't much of a problem. However, if you want to style your grid more generically using hierarchical CSS selectors, it gets a lot more tricky to format tables that don't properly delineate headers and body content. Also, many plug-ins and other JavaScript utilities that work on tables require a properly formed table layout, and many of these simply won't work out of the box with a GridView. For example, one of the things I wanted to do for this app is use the jQuery TableSorter plug-in, which, not surprisingly, requires table headers in the DOM document. Out of the box, the TableSorter plug-in doesn't work with GridView controls because of the lack of a <thead> section to work on.

    Luckily, with a little help from some jQuery scripting, there's a really easy fix to this problem. Basically, if we know the GridView-generated table has a header in it, code like the following will move the headers from <tbody> to <thead>:

        <script type="text/javascript">
            $(document).ready(function () {
                // Fix up GridView to support THEAD tags
                $("#gvCustomers tbody").before("<thead><tr></tr></thead>");
                $("#gvCustomers thead tr").append($("#gvCustomers th"));
                $("#gvCustomers tbody tr:first").remove();

                $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
            });
        </script>

    And voila, you have a table that now works with the TableSorter plug-in.

    If you use GridViews a lot, you might want something a little more generic, so the following does the same thing but should work more generically on any GridView/DataGrid missing its <thead> tag:

        function fixGridView(tableEl) {
            var jTbl = $(tableEl);
            if (jTbl.find("tbody>tr>th").length > 0) {
                jTbl.find("tbody").before("<thead><tr></tr></thead>");
                jTbl.find("thead tr").append(jTbl.find("th"));
                jTbl.find("tbody tr:first").remove();
            }
        }

    which you can call like this:

        $(document).ready(function () {
            fixGridView($("#gvCustomers"));
            $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
        });

    Server Side THEAD Rendering [updated from comments 11/21/2010]

    Several commenters pointed out that you can also do this on the server side by using the GridView.HeaderRow.TableSection property to force rendering with a proper table header. I was unaware of this option, actually; not exactly an easy one to discover. One issue here is that the timing of this needs to happen during the databinding process, so you need to use an event handler:

        this.gvCustomers.DataBound += (object o, EventArgs ev) =>
        {
            gvCustomers.HeaderRow.TableSection = TableRowSection.TableHeader;
        };
        this.gvCustomers.DataSource = custList;
        this.gvCustomers.DataBind();

    You can apply the same logic for the FooterRow. It's beyond me why this rendering mode isn't the default for a GridView; why would you ever want a table that doesn't use a THEAD section??? But I digress :-)

    I don't use GridViews much anymore, opting for more flexible approaches using ListViews or even plain code-based views or other custom displays that allow more control over layout. But I still see a lot of old code that does use them old clunkers, including my own :) (gulp), and this does make life a little bit easier, especially if you're working with any of the jQuery table-related plug-ins that expect a proper table structure.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET  jQuery


  • How to limit JTextArea max Rows and Columns?

    - by Billbo bug
    I am using a JTextArea in a JScrollPane, and I want to limit the maximum number of lines possible and the maximum chars in each line. I need the string to be exactly like it appears on screen: each line ends with '\n' (if there is another line after it), and the user can insert only X lines and Y chars in each line.

    I tried to limit the lines, but I don't know exactly how many lines I have because of the line wrapping. Line wrapping starts a new line visually on the screen (because of the width of the JTextArea), but in the string of the component it is really the same line, with no '\n' to indicate a new line. I also do not have an idea how to limit the max chars in each line while typing.

    There are two stages:

    1. Typing the string: ensure that the user cannot type more than X lines and Y chars in each line (even if the line wraps only visually or the user typed '\n').

    2. Inserting the string into the DB: after clicking 'OK', convert the string so that every line ends with "\n", even if the user did not type it and the line was wrapped only visually.

    There are a few problems if I count the chars in the line and insert '\n' at the end of the line; that's why I decided to do it in two stages. In the first stage, while the user is typing, I would rather only limit it visually and force line wrapping or something similar. Only in the second stage, when I save the string, will I add the '\n' even if the user did not type it at the end of the lines.

    Does anyone have an idea? I know that I will have to use a DocumentFilter OR a StyledDocument. Here is sample code that limits only the lines to 3 (but not the chars in each row to 19):

        private JTextArea textArea;

        textArea = new JTextArea(3, 19);
        textArea.setLineWrap(true);
        textArea.setDocument(new LimitedStyledDocument(3));
        JScrollPane scrollPane = new JScrollPane(textArea);

        public class LimitedStyledDocument extends DefaultStyledDocument {
            /** Maximum number of lines allowed */
            int maxLines;

            public LimitedStyledDocument(int maxLines) {
                this.maxLines = maxLines;
            }

            public void insertString(int offs, String str, AttributeSet attribute) throws BadLocationException {
                Element root = this.getDefaultRootElement();
                int lineCount = getLineCount(str);
                if (lineCount + root.getElementCount() <= maxLines) {
                    super.insertString(offs, str, attribute);
                } else {
                    Toolkit.getDefaultToolkit().beep();
                }
            }

            /**
             * @param str
             * @return the count of '\n' in the String
             */
            private int getLineCount(String str) {
                String tempStr = str;
                int index;
                int lineCount = 0;
                while (tempStr.length() > 0) {
                    index = tempStr.indexOf("\n");
                    if (index != -1) {
                        lineCount++;
                        tempStr = tempStr.substring(index + 1);
                    } else {
                        break;
                    }
                }
                return lineCount;
            }
        }
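    A hedged sketch of the DocumentFilter route mentioned in the question (the limits are illustrative; this counts only hard '\n' lines and does not account for visual wrapping):

        import javax.swing.text.*;

        // Rejects any edit that would exceed maxLines lines or maxCols chars per line.
        class LineLimitFilter extends DocumentFilter {
            private final int maxLines, maxCols;

            LineLimitFilter(int maxLines, int maxCols) {
                this.maxLines = maxLines;
                this.maxCols = maxCols;
            }

            @Override
            public void replace(FilterBypass fb, int off, int len, String text, AttributeSet attrs)
                    throws BadLocationException {
                // Build the would-be document content and validate it before applying.
                Document doc = fb.getDocument();
                String before = doc.getText(0, doc.getLength());
                String after = before.substring(0, off) + (text == null ? "" : text)
                             + before.substring(off + len);
                if (isLegal(after)) {
                    super.replace(fb, off, len, text, attrs);
                } else {
                    java.awt.Toolkit.getDefaultToolkit().beep();
                }
                // insertString() can be routed the same way; replace() covers normal typing.
            }

            private boolean isLegal(String s) {
                String[] lines = s.split("\n", -1);
                if (lines.length > maxLines) return false;
                for (String line : lines) {
                    if (line.length() > maxCols) return false;
                }
                return true;
            }
        }

        // usage:
        // ((AbstractDocument) textArea.getDocument()).setDocumentFilter(new LineLimitFilter(3, 19));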


  • Enable Proxy Authentication for normal windows application.

    - by Lalit
    My company's internet access works through a proxy server with authentication (i.e. the browser prompts for username/password every time I try to access any web page). Now I have some Windows applications which try to access the internet (like WebPI, or Visual Studio 2008 for RSS feeds), but as they are unable to pop up the authentication window, they fail to connect with the error: (407) Proxy Authentication Required. The exception here is VS2008: the first time, it always fails to load RSS feeds on the start page, but when I click on the link, it shows the authentication window and everything works fine after that.

    My question is: how can I configure a normal Windows application (through its app.config/app.manifest file) so that it can show the authentication window or supply default credentials when accessing the web?

    To explore this further, I created a console application in VS2008 which tries to search for something on Google and display the result on the console. Code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Net;
        using System.IO;

        namespace WebAccess.Test
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("Enter Search Criteria:");
                    string criteria = Console.ReadLine();
                    string baseAddress = "http://www.google.com/search?q=";
                    string output = "";
                    try
                    {
                        // Create the web request
                        HttpWebRequest request = WebRequest.Create(baseAddress + criteria) as HttpWebRequest;

                        // Get response
                        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
                        {
                            // Get the response stream
                            StreamReader reader = new StreamReader(response.GetResponseStream());

                            // Console application output
                            output = reader.ReadToEnd();
                        }
                        Console.WriteLine("\nResponse : \n\n{0}", output);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("\nError : \n\n{0}", ex.ToString());
                    }
                }
            }
        }

    When running this, it gives:

        Enter Search Criteria:
        Lalit

        Error :

        System.Net.WebException: The remote server returned an error: (407) Proxy Authentication Required.
           at System.Net.HttpWebRequest.GetResponse()
           at WebAccess.Test.Program.Main(String[] args) in D:\LK\Docs\VS.NET\WebAccess.Test\WebAccess.Test\Program.cs:line 26
        Press any key to continue . . .
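    Two standard approaches worth trying, offered as a hedged sketch (not part of the original question; both assume the proxy will accept the current Windows logon):

        // In code: attach your default (Windows) credentials to the system proxy.
        HttpWebRequest request = WebRequest.Create(baseAddress + criteria) as HttpWebRequest;
        request.Proxy = WebRequest.DefaultWebProxy;
        request.Proxy.Credentials = CredentialCache.DefaultCredentials;

    Or declaratively, in the application's app.config, which also works for applications you can't recompile:

        <configuration>
          <system.net>
            <defaultProxy useDefaultCredentials="true" />
          </system.net>
        </configuration>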


  • Custom Annotation not showing all

    - by funatsg
    Okay, I have a number of pins on the MapKit map. These pins show different types of attractions (e.g. parks, farms, etc.). I want to add custom images for these different types of pins: parks have a park image, and so on. However, when I add them in, not all the images show up successfully. For example, parks should have 5 pins, but the image only came up on 2 pins, whereas the other 3 are default red pins. But if I use colours to differentiate them instead, for example:

        [pin setPinColor:MKPinAnnotationColorGreen];

    it works! Anyone know what the problem is? Relevant code below; tell me if you need more. Thanks!

        - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
        {
            if ([annotation isKindOfClass:MKUserLocation.class]) {
                // user location view is being requested,
                // return nil so it uses the default, which is a blue dot...
                return nil;
            }

            //NSLog(@"View for Annotation is called");
            MKPinAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:nil];
            pin.userInteractionEnabled = TRUE;
            MapEvent *event = (MapEvent *)annotation;
            NSLog(@"thetype: %@", event.thetype);

            if ([event.thetype isEqualToString:@"adv"]) {
                //[pin setPinColor:MKPinAnnotationColorGreen];
                pin.image = [UIImage imageNamed:@"padv.png"];
            } else if ([event.thetype isEqualToString:@"muse"]) {
                //[pin setPinColor:MKPinAnnotationColorPurple];
                pin.image = [UIImage imageNamed:@"pmuse.png"];
            } else if ([event.thetype isEqualToString:@"nightlife"]) {
                pin.image = [UIImage imageNamed:@"pnight.png"];
            } else if ([event.thetype isEqualToString:@"parks"]) {
                pin.image = [UIImage imageNamed:@"ppark.png"];
            } else if ([event.thetype isEqualToString:@"farms"]) {
                pin.image = [UIImage imageNamed:@"pfarm.png"];
            } else {
                [pin setPinColor:MKPinAnnotationColorRed];
            }

            pin.canShowCallout = YES;
            pin.animatesDrop = YES;

            UIButton *rightButton = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
            [rightButton addTarget:self action:@selector(clickAnnotation:) forControlEvents:UIControlEventTouchUpInside];
            [rightButton setTitle:event.uniqueID forState:UIControlStateNormal];
            pin.rightCalloutAccessoryView = rightButton;

            return pin;
        }
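    One hedged guess at the cause, based on how MapKit documents these classes: MKPinAnnotationView draws its own pin image (and animatesDrop re-applies it), so custom images are meant to go on a plain MKAnnotationView instead, ideally with view reuse. A minimal sketch (the reuse identifier is illustrative, not from the question):

        static NSString *reuseId = @"eventPin";
        MKAnnotationView *view = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
        if (view == nil) {
            view = [[[MKAnnotationView alloc] initWithAnnotation:annotation
                                              reuseIdentifier:reuseId] autorelease];
            view.canShowCallout = YES;
        } else {
            view.annotation = annotation;
        }
        view.image = [UIImage imageNamed:@"ppark.png"]; // choose per event.thetype, as above
        return view;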


  • Why Is Faulty Behaviour In The .NET Framework Not Fixed?

    - by Alois Kraus
    Here is the scenario: you have a Windows Forms application that calls a method via Invoke or BeginInvoke which throws exceptions. Now you want to find out where the error occurred and how the method was called. Here is what happens when we call Begin/EndInvoke or simply Invoke. The actual code that was executed was like this:

        private void cInvoke_Click(object sender, EventArgs e)
        {
            InvokingFunction(CallMode.Invoke);
        }

        [MethodImpl(MethodImplOptions.NoInlining)]
        void InvokingFunction(CallMode mode)
        {
            switch (mode)
            {
                case CallMode.Invoke:
                    this.Invoke(new MethodInvoker(GenerateError));

    The faulting method is called GenerateError, which throws a NotImplementedException and wraps it in a NotSupportedException:

        [MethodImpl(MethodImplOptions.NoInlining)]
        void GenerateError()
        {
            F1();
        }

        private void F1()
        {
            try
            {
                F2();
            }
            catch (Exception ex)
            {
                throw new NotSupportedException("Outer Exception", ex);
            }
        }

        private void F2()
        {
            throw new NotImplementedException("Inner Exception");
        }

    It is clear that the methods F2 and F1 actually threw these exceptions, but we do not see them in the call stack. If we call InvokingFunction directly and catch and print the exception, we can find out very easily how we got into this situation: we see the methods F1, F2, GenerateError and InvokingFunction directly in the stack trace, and we see that two exceptions actually occurred. Here, for comparison, is what we get from Invoke/EndInvoke:

        System.NotImplementedException: Inner Exception
            StackTrace:
               at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)
               at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)
               at WindowsFormsApplication1.AppForm.InvokingFunction(CallMode mode)
               at WindowsFormsApplication1.AppForm.cInvoke_Click(Object sender, EventArgs e)
               at System.Windows.Forms.Control.OnClick(EventArgs e)
               at System.Windows.Forms.Button.OnClick(EventArgs e)

    The exception message is kept, but the stack starts at our Invoke call and not at the faulting method F2. We therefore have no clue where this exception occurred! The stack starts at the method MarshaledInvoke because the exception is rethrown there with a plain "throw catchedException", which resets the stack trace. That is bad, but things are even worse: if, let's say, 5 exceptions occurred previously, .NET will return only the first (innermost) exception. That means we not only lose the original call stack but all the other exceptions and all the data contained therein as well. It is a pity that MS knows about this and simply closes this issue as not important. Programmers will play around with threads a lot more than before, thanks to TPL and PLINQ, which come with .NET 4. Multithreading is hyped quite a lot in the press and everybody wants to use threads. But if the .NET Framework makes it nearly impossible to track down the easiest UI multithreading issue, I have a problem with that. The problem has been reported but obviously not solved.

    .NET 4 Beta 2 still has not changed that dreaded GetBaseException call in MarshaledInvoke, which returns only the innermost exception of the complete exception stack. It is really time to fix this. WPF, on the other hand, does the right thing and wraps the exceptions inside a TargetInvocationException, which makes much more sense. But not everybody uses WPF for their daily work, and Windows Forms applications will still be used for a long time.

    Below is the code to repro the issues shown, and to render the exceptions in a meaningful way. The default Exception.ToString implementation generates a hard-to-interpret stack if several nested exceptions occurred.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using System.Threading;
        using System.Globalization;
        using System.Runtime.CompilerServices;

        namespace WindowsFormsApplication1
        {
            public partial class AppForm : Form
            {
                enum CallMode
                {
                    Direct = 0,
                    BeginInvoke = 1,
                    Invoke = 2
                };

                public AppForm()
                {
                    InitializeComponent();
                    Thread.CurrentThread.CurrentUICulture = CultureInfo.InvariantCulture;
                    Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);
                }

                void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
                {
                    cOutput.Text = PrintException(e.Exception, 0, null).ToString();
                }

                private void cDirectUnhandled_Click(object sender, EventArgs e)
                {
                    InvokingFunction(CallMode.Direct);
                }

                private void cDirectCall_Click(object sender, EventArgs e)
                {
                    try
                    {
                        InvokingFunction(CallMode.Direct);
                    }
                    catch (Exception ex)
                    {
                        cOutput.Text = PrintException(ex, 0, null).ToString();
                    }
                }

                private void cInvoke_Click(object sender, EventArgs e)
                {
                    InvokingFunction(CallMode.Invoke);
                }

                private void cBeginInvokeCall_Click(object sender, EventArgs e)
                {
                    InvokingFunction(CallMode.BeginInvoke);
                }

                [MethodImpl(MethodImplOptions.NoInlining)]
                void InvokingFunction(CallMode mode)
                {
                    switch (mode)
                    {
                        case CallMode.Direct:
                            GenerateError();
                            break;
                        case CallMode.Invoke:
                            this.Invoke(new MethodInvoker(GenerateError));
                            break;
                        case CallMode.BeginInvoke:
                            IAsyncResult res = this.BeginInvoke(new MethodInvoker(GenerateError));
                            this.EndInvoke(res);
                            break;
                    }
                }

                [MethodImpl(MethodImplOptions.NoInlining)]
                void GenerateError()
                {
                    F1();
                }

                private void F1()
                {
                    try
                    {
                        F2();
                    }
                    catch (Exception ex)
                    {
                        throw new NotSupportedException("Outer Exception", ex);
                    }
                }

                private void F2()
                {
                    throw new NotImplementedException("Inner Exception");
                }

                StringBuilder PrintException(Exception ex, int identLevel, StringBuilder sb)
                {
                    StringBuilder builtStr = sb;
                    if (builtStr == null)
                        builtStr = new StringBuilder();

                    if (ex == null)
                        return builtStr;

                    WriteLine(builtStr, String.Format("{0}: {1}", ex.GetType().FullName, ex.Message), identLevel);
                    WriteLine(builtStr, String.Format("StackTrace: {0}", ShortenStack(ex.StackTrace)), identLevel + 1);
                    builtStr.AppendLine();

                    return PrintException(ex.InnerException, ++identLevel, builtStr);
                }

                void WriteLine(StringBuilder sb, string msg, int identLevel)
                {
                    foreach (string trimmedLine in SplitToLines(msg)
                                                   .Select((line) => line.Trim()))
                    {
                        for (int i = 0; i < identLevel; i++)
                            sb.Append('\t');
                        sb.Append(trimmedLine);
                        sb.AppendLine();
                    }
                }

                string ShortenStack(string stack)
                {
                    int nonAppFrames = 0;
                    // Skip stack frames not part of our app, but include two foreign frames and skip the rest.
                    // If one of our own stack frames is encountered, reset the counter to 0.
                    return SplitToLines(stack)
                                      .Where((line) =>
                                      {
                                          nonAppFrames = line.Contains("WindowsFormsApplication1") ? 0 : nonAppFrames + 1;
                                          return nonAppFrames < 3;
                                      })
                                      .Select((line) => line)
                                      .Aggregate("", (current, line) => current + line + Environment.NewLine);
                }

                static char[] NewLines = Environment.NewLine.ToCharArray();

                string[] SplitToLines(string str)
                {
                    return str.Split(NewLines, StringSplitOptions.RemoveEmptyEntries);
                }
            }
        }
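    A hedged sketch of one workaround for the Windows Forms case (not from the article): catch inside the marshaled delegate itself, before Control.Invoke's marshaling code can rethrow and reset the stack:

        Exception marshaled = null;
        this.Invoke(new MethodInvoker(() =>
        {
            try { GenerateError(); }
            catch (Exception ex) { marshaled = ex; } // stack and inner exceptions intact
        }));
        if (marshaled != null)
        {
            cOutput.Text = PrintException(marshaled, 0, null).ToString();
        }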


  • EJB client can't find a DataSource that tests successfully in WebLogic admin console

    - by suszterpatt
    Disclaimer: I'm completely new to JEE/EJB and all that, so bear with me.

    I have a simple EJB that I can successfully deploy using JDeveloper 11g's integrated WebLogic server and a remote database connection (JDBC). I have a DataSource named "PGY2" defined in WebLogic, and I can test it successfully from the admin console. Here's the code of the client I'm trying to test it with (generated entirely by JDev except for the three method calls on adminManager):

        public class AdminManagerClient
        {
            public static void main(String[] args)
            {
                try
                {
                    final Context context = getInitialContext();
                    AdminManager adminManager = (AdminManager) context.lookup("Uran-AdminManager#hu.elte.pgy2.BACNAAI.UranEJB.AdminManager");
                    adminManager.addAdmin("root", "root", "Kovács Isten");
                    adminManager.addStudent("BACNAAI", "matt", "B Cs", 2005);
                    adminManager.addTeacher("SIPKABT", "patt", "S P", "numanal", "Dr.");
                }
                catch (Exception ex)
                {
                    ex.printStackTrace();
                }
            }

            private static Context getInitialContext() throws NamingException
            {
                Hashtable env = new Hashtable();
                // WebLogic Server 10.x connection details
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://127.0.0.1:7101");
                return new InitialContext(env);
            }
        }

    But when I try to run this, I get the following error on the line with adminManager.addAdmin (that is, after the lookup):

        javax.ejb.EJBException: EJB Exception: ; nested exception is:
        Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.0.2 (Build 20081024)): org.eclipse.persistence.exceptions.DatabaseException
        Internal Exception: java.sql.SQLException: Internal error: Cannot obtain XAConnection
        Creation of XAConnection for pool PGY2 failed after waitSecs:30 :
        java.sql.SQLException: Data Source PGY2 does not exist.

    Why can't the client find the data source, and how do I make it find it?

    EDIT: I took a closer look at WebLogic's output during deployment, and I found this. I have no idea what it means, but it may be relevant:

        <2010.05.20. 0:50:43 CEST> <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application 'PGY2'.
        weblogic.application.ModuleException:
                at weblogic.jdbc.module.JDBCModule.activate(JDBCModule.java:349)
                at weblogic.application.internal.flow.ModuleListenerInvoker.activate(ModuleListenerInvoker.java:107)
                at weblogic.application.internal.flow.DeploymentCallbackFlow$2.next(DeploymentCallbackFlow.java:411)
                at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
                at weblogic.application.internal.flow.DeploymentCallbackFlow.activate(DeploymentCallbackFlow.java:74)
                Truncated. see log file for complete stacktrace

        weblogic.common.ResourceException: is already bound
                at weblogic.jdbc.common.internal.RmiDataSource.start(RmiDataSource.java:387)
                at weblogic.jdbc.common.internal.DataSourceManager.createAndStartDataSource(DataSourceManager.java:136)
                at weblogic.jdbc.common.internal.DataSourceManager.createAndStartDataSource(DataSourceManager.java:97)
                at weblogic.jdbc.module.JDBCModule.activate(JDBCModule.java:346)
                at weblogic.application.internal.flow.ModuleListenerInvoker.activate(ModuleListenerInvoker.java:107)
                Truncated. see log file for complete stacktrace
        >


  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and, more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we attempted to answer them here:

    Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the web 2.0 (Twitter, Facebook, etc.) user experience or are you structuring current departments?
    A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas.

    Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call out versus the official practice that everybody would use in the morning.
    A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions, at least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs", as you name them, are able to be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes, and this could be right next to a view that showed off the items to be watched every morning.

    Q: It's great to hear about customers using the software, but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.
    A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network.
    You can see it here: http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html
    And we've got excellent documentation that walks you through the steps here: http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm
    And there's a whole set of demos and examples of what WebCenter can do at this URL: http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html

    Q: How do you anticipate managing metadata across the enterprise to make content findable?
    A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why:
    - For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do.
    - For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed.
    - For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand.
    - And for SOA, metadata means labels for parts of the processes that business owners should understand, connected to development terminology.
    There are also additional requirements for metadata to be available to the team building these new solutions as well as requirements to make this metadata available to the running system. These requirements are often separated by "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources around Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers, as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise-wide results.

    Q: Can you give your definition of terms as to document and content, please?
    A: Content applies to a broad category of information, from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset, including images, video, and voice. A document is just one type of content.

    Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?
    A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter. While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration. This is part of the Oracle Fusion Middleware strategy, which is to provide best-in-class capabilities for Content and Portals. When combined, the synergy between the two products enables users to quickly add capabilities when they are needed. For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, the Oracle UCM product includes these capabilities, and they can be quickly added. There's no need to move content around or add another system to support this; it's just a feature that gets turned on within Oracle UCM.

    Q: All customers have some interaction with their applications and have many older versions. How do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?
    A: Just as Service Oriented Architectures allowed for connecting the processes of different application systems to work together, there's a need for a similar approach with regards to these Enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services, so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications, no matter what technology the application has been implemented in.

    Q: We've heard about Oracle's next-generation applications called "Fusion Applications". Can you tell me how all this works together?
    A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology for how all the application screens have been implemented. And the core concept of task flows allows all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications, including JD Edwards, Siebel, Peoplesoft, and eBusiness Suite. The core technology enables all these customers to have a much smoother upgrade path to Fusion Applications. They get immediate benefits by injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then, when the time comes, their users will already be well versed in how the new capabilities work.

    Q: Does any of this work with non-Oracle software? Other databases? Other application servers? etc.
    A: We have made sure that Oracle WebCenter delivers the broadest set of development choices, so that no matter what technology your developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQLServer. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss.

    Q: How do we balance user and IT requirements in regards to Enterprise 2.0 technologies?
    A: Wrong decisions are often made because employee knowledge is not tapped efficiently, and opportunities to innovate are often missed because the right people do not work together. Collaboration amongst workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission critical, it is essential that they maintain the highest level of performance and availability while scaling to support larger communities.

    Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?
    A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements from users as well as address the requirements from IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating and centralizing infrastructure and administration.

    Read the article

  • Rendering HTML in Java

    - by ferronrsmith
    I am trying to create a help panel for an application I am working on. The help file has already been created using HTML, and I would like it to be rendered in a pane and shown. All the code I have seen shows how to render a site, e.g. "http://google.com"; I want to render a file from my PC, e.g. "file://c:\tutorial.html". This is the code I have, but it doesn't seem to be working.

    import javax.swing.JEditorPane;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JScrollPane;
    import javax.swing.SwingUtilities;

    import java.awt.Color;
    import java.awt.Container;
    import java.io.IOException;

    import static java.lang.System.err;
    import static java.lang.System.out;

    final class TestHTMLRendering {
        // ------------------------------ CONSTANTS ------------------------------

        /** height of frame in pixels */
        private static final int height = 1000;

        /** width of frame in pixels */
        private static final int width = 1000;

        private static final String RELEASE_DATE = "2007-10-04";

        /** title for frame */
        private static final String TITLE_STRING = "HTML Rendering";

        /** URL of page we want to display. Note: local file URLs need the
         *  file:/// form with forward slashes; "file://C:\\print.html" will not resolve. */
        private static final String URL = "file:///C:/print.html";

        /** program version */
        private static final String VERSION_STRING = "1.0";

        // --------------------------- main() method ---------------------------

        /**
         * Debugging harness for a JFrame
         *
         * @param args command line arguments are ignored.
         */
        @SuppressWarnings( { "UnusedParameters" } )
        public static void main( String args[] ) {
            // Invoke the run method on the Swing event dispatch thread.
            // Sun now recommends you call ALL your GUI methods on the Swing
            // event thread, even the initial setup.
            // Could also use invokeAndWait and catch exceptions.
            SwingUtilities.invokeLater( new Runnable() {
                /** Fire up a JFrame on the Swing thread. */
                public void run() {
                    out.println( "Starting" );
                    final JFrame jframe = new JFrame( TITLE_STRING + " " + VERSION_STRING );
                    Container contentPane = jframe.getContentPane();
                    jframe.setSize( width, height );
                    contentPane.setBackground( Color.YELLOW );
                    contentPane.setForeground( Color.BLUE );
                    jframe.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
                    try {
                        out.println( "acquiring URL" );
                        JEditorPane jep = new JEditorPane( URL );
                        out.println( "URL acquired" );
                        JScrollPane jsp = new JScrollPane( jep,
                                JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED,
                                JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED );
                        contentPane.add( jsp );
                    } catch ( IOException e ) {
                        err.println( "can't find URL" );
                        contentPane.add( new JLabel( "can't find URL" ) );
                    }
                    jframe.validate();
                    jframe.setVisible( true );
                    // Shows page, with HTML comments erroneously displayed.
                    // The links are not clickable.
                }
            } );
        }// end main
    }// end TestHTMLRendering

    Read the article

  • capture data from FORM using jquery/ajax/json

    - by nisardotnet
    I have a few textboxes on the form, and when the user submits I want to capture the data and insert it into the db. Here is what my code looks like:

    beforeSubmit: function(data) { // called just before the form is submitted
        var item = $("[id$='item']");
        var category = $("[id$='category']");
        var record = $("[id$='record']");

        var json = "{'ItemName':'" + escape(item.val()) +
                   "','CategoryID':'" + category.val() +
                   "','RecordID':'" + record.val() + "'}";

        var ajaxPage = "DataProcessor.aspx?Save=1"; // this page is where the data is to be retrieved and processed

        var options = {
            type: "POST",
            url: ajaxPage,
            data: json,
            contentType: "application/json;charset=utf-8",
            dataType: "json",
            async: false,
            success: function(response) { alert("success: " + response); },
            error: function(msg) { alert("failed: " + msg); }
        };

        // execute the ajax call and get a response
        var returnText = $.ajax(options).responseText;
        if (returnText == 1) {
            record.html(returnText);
            $("#divMsg").html("<font color=blue>Record saved successfully.</font>");
        } else {
            record.html(returnText);
            $("#divMsg").html("<font color=red>Record not saved successfully.</font>");
        }

        // $("#data").html("<font color=blue>Data sent to the server :</font> <br />" + $.param(data));
    },

    Here is what the data sent to the server looks like if I uncomment this line: // $("#data").html("Data sent to the server : " + $.param(data));

    __VIEWSTATE=%2FwEPDwULLTE4ODM1ODM4NDFkZOFEQfA7cHuTisEwOQmIaj1nYR23&__EVENTVALIDATION=%2FwEWDwLuksaHBgLniKOABAKV8o75BgLlosbxAgKUjpHvCALf9YLVCgLCtfnhAQKyqcC9BQL357nNAQLW9%2FeuDQKvpuq2CALyveCRDwKgoPWXDAKhwImNCwKiwImNC1%2Fq%2BmUXqcSuJ0z0F%2FQXKM3pH070&firstname=Nisar&surname=Khan&day_fi=12&month_fi=12&year_fi=1234&lastFour_fi=777&countryPrefix_fi=1&areaCode_fi=555-555&phoneNumber_fi=5555&email_fi=nisardotnet%40gmail.com&username=nisarkhan&password=123456&retypePassword=123456

    DataProcessor.aspx.cs:

    public partial class DataProcessor : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
                ProcessAjaxRequest();
        }

        private void ProcessAjaxRequest()
        {
            if (Request.ContentType.Contains("json") && Request.QueryString["Save"] != null)
                SaveMyData();
        }

        private void SaveMyData()
        {
            // data passed in as JSON format
            System.IO.StreamReader sr = new System.IO.StreamReader(Request.InputStream);
            string line = "";
            line = sr.ReadToEnd();

            // This is all you need to parse the JSON string into a JObject.
            // Requires the namespace Newtonsoft.Json.Linq;
            JObject jo = JObject.Parse(line);

            // Note: Console.WriteLine output never reaches the AJAX caller in ASP.NET;
            // only what is written to the Response travels back.
            Console.WriteLine((string)jo["RecordID"]);
            Console.WriteLine(Server.UrlDecode((string)jo["ItemName"])); // Server.UrlDecode reverses the escape() applied on the client

            Response.Write((string)jo["CategoryID"]); // this is what ends up in .ajax(options).responseText
        }
    }

    The above code is not working and I need a way to capture the values before I insert them into the db. Any help? Thanks.
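    In case it helps, a minimal sketch of how SaveMyData could persist the captured values once JObject.Parse has run. The Items table, its column names, and the MyDb connection-string name are hypothetical, not from the original post (the using directives are shown for completeness and belong at the top of the file):

    using System.Configuration;
    using System.Data.SqlClient;
    using Newtonsoft.Json.Linq;

    private void SaveMyData()
    {
        string body;
        using (var sr = new System.IO.StreamReader(Request.InputStream))
        {
            body = sr.ReadToEnd();
        }
        JObject jo = JObject.Parse(body);

        string itemName = Server.UrlDecode((string)jo["ItemName"]);
        string categoryId = (string)jo["CategoryID"];
        string recordId = (string)jo["RecordID"];

        // hypothetical table and connection-string name; adjust to the real schema
        string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO Items (ItemName, CategoryID, RecordID) VALUES (@name, @cat, @rec)", conn))
        {
            cmd.Parameters.AddWithValue("@name", itemName);
            cmd.Parameters.AddWithValue("@cat", categoryId);
            cmd.Parameters.AddWithValue("@rec", recordId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        Response.Write("1"); // only Response output reaches the caller; "1" satisfies the client's returnText == 1 check
    }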

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • foreach loop corrupting my array?

    - by WmasterJ
    Explanation: I have a multidimensional array that is iterated over to create a categorized view of people with different research interests. The main array looks something like this:

    Array
    (
        ...
        ['Cell Biology'] => Array(4 elements)
        ['Molecular'] => Array(6 elements)
        ['Biology Education'] => Array(14 elements)
        ['Plant Biology'] => Array(19 elements)   <--- Last element in array
    )

    I know that the entire array is intact and correctly structured. The only information inside these arrays is a user id, like so:

    Array ('Plant Biology') 19 elements
    (
        [0] => 737
        [1] => 742
        [2] => 748
        ...
    )

    My problem is that after I run the main array through a foreach loop, the last 'sub-array' gets messed up. By messed up I mean that what you see above instead looks like:

    String (13 characters) 'Plant Biology'

    This happens without my doing anything at all to that array inside the loop. Any tips as to what it might be?

    PHP Code:

    // ---> Array is OK here
    echo "<h2>Research divided</h2>";

    // Loop areas and list them in 2 columns
    foreach ($research['areas'] as $area => $areaArray) {
        // ---> Here it is already corrupted
        $count = count($areaArray);
        if ($count > 0) {
            echo "<h3>$area</h3><hr/>";
            echo "<table class='area_list'><tr>";

            // Loop users within areas, divided up in 2 columns
            for ($i = 0; $i < $count; $i++) {
                $uid = $areaArray[$i];
                echo "<li>$uid</li>";
            }
            echo "</tr></table>";
        }
    }

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This post more or less completes the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ by adding a practical example that I believe is commonly needed, i.e. uploading and presenting an image from a user. First we set up local storage, and then we configure it to work on a web role.

    Steps:
    1. Configure connection string locally.
    2. Configure model, controllers and razor views.

    1. Setup connection string
    1.1 Right click your web role and choose "Properties".
    1.2 Click Settings.
    1.3 Add setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis to the right. (The ellipsis appears when you mark the area.)
    1.6 The following window appears. Select "Windows Azure storage emulator" and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.
    2.1 Click Tools –> Library Package Manager –> Manage NuGet Packages for Solution.
    2.2 This is what it looks like after it has been added.

    Now on to what the code should look like.
    3.1 First we need a view which collects images to upload. Here, Index.cshtml.

    @model List<string>

    @{
        ViewBag.Title = "Index";
    }

    <h2>Index</h2>
    <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">

        <label for="file">Filename:</label>
        <input type="file" name="file" id="file1" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file2" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file3" />
        <br />
        <label for="file">Filename:</label>
        <input type="file" name="file" id="file4" />
        <br />
        <input type="submit" value="Submit" />

    </form>

    @foreach (var item in Model) {
        <img src="@item" alt="Alternate text"/>
    }

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the blob handler. I use this as a folder for the pictures for each user. If this is not a requirement you could just call it container, or anything in lowercase, directly when creating the container.

    public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
    {
        BlobHandler bh = new BlobHandler("containername");
        bh.Upload(file);
        var blobUris = bh.GetBlobs();

        return RedirectToAction("Index", blobUris);
    }

    3.3 The handler model. I'll let the comments speak for themselves.

    public class BlobHandler
    {
        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));

        private string imageDirecoryUrl;

        /// <summary>
        /// Receives the user's Id for where the pictures are and creates
        /// a blob container with that name if it does not exist.
        /// </summary>
        /// <param name="imageDirecoryUrl"></param>
        public BlobHandler(string imageDirecoryUrl)
        {
            this.imageDirecoryUrl = imageDirecoryUrl;

            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            // Create the container if it doesn't already exist.
            container.CreateIfNotExists();

            // Make available to everyone
            container.SetPermissions(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });
        }

        public void Upload(IEnumerable<HttpPostedFileBase> file)
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            if (file != null)
            {
                foreach (var f in file)
                {
                    if (f != null)
                    {
                        CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                        blockBlob.UploadFromStream(f.InputStream);
                    }
                }
            }
        }

        public List<string> GetBlobs()
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve reference to a previously created container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            List<string> blobs = new List<string>();

            // Loop over blobs within the container and output the URI to each of them
            foreach (var blobItem in container.ListBlobs())
                blobs.Add(blobItem.Uri.ToString());

            return blobs;
        }
    }

    3.4 So, when the files have been uploaded we want to present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. You can see the storage emulator UI here:

    4.1 If you get any exceptions or errors I suggest first checking whether the service is running correctly. I had problems with this; they seemed related to the installation, and a reboot fixed them.

    5. Set up for cloud storage. To do this we need to add configuration for the cloud just as we did for local storage in step one.
    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click "Manage keys". (Image from a different blog post though.)

    5.2 Do as in step 1, but replace step 1.6 with:
    1.6 Choose "Manually entered credentials". Enter your account name.
    1.7 Paste your Account Key from step 5.1 and click OK.

    5.3 Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
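    Picking up the view-model suggestion from 3.4, here is a minimal sketch of what that could look like if added to the BlobHandler above. The BlobImageViewModel shape and the use of the blob name as placeholder alt text are assumptions, not part of the original walkthrough (OfType requires a using System.Linq directive):

    public class BlobImageViewModel
    {
        public string Uri { get; set; }
        public string FileName { get; set; }
        public string AltText { get; set; }
    }

    public List<BlobImageViewModel> GetBlobViewModels()
    {
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

        var models = new List<BlobImageViewModel>();

        // Only block blobs are taken; directory entries in the listing are skipped.
        foreach (var blob in container.ListBlobs().OfType<CloudBlockBlob>())
        {
            models.Add(new BlobImageViewModel
            {
                Uri = blob.Uri.ToString(),
                FileName = blob.Name,
                AltText = blob.Name // placeholder; real alt text could live in blob metadata
            });
        }
        return models;
    }

    The view would then declare @model List<BlobImageViewModel> and render <img src="@item.Uri" alt="@item.AltText" /> instead of a bare URI string.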

    Read the article

  • SQL Server stored procedures - update column based on variable name..?

    - by ClarkeyBoy
    Hi, I have a data driven site with many stored procedures. What I want to eventually be able to do is to say something like:

    For Each @variable in sproc inputs
        UPDATE @TableName SET @variable.toString = @variable
    Next

    I would like it to be able to accept any number of arguments. It will basically loop through all of the inputs and update the column bearing the name of the variable with the value of the variable - for example, column "Name" would be updated with the value of @Name.

    I would basically like to have one stored procedure for updating and one for creating. However, to do this I will need to be able to convert the actual name of a variable, not its value, to a string.

    Question 1: Is it possible to do this in T-SQL, and if so, how?
    Question 2: Are there any major drawbacks to using something like this (like performance or CPU usage)?

    I know that if a value is not valid it will only prevent the update involving that variable and any subsequent ones, but all the data is validated in the vb.net code anyway, so it will always be valid on submitting to the database, and I will ensure that only variables where the column exists can be submitted.

    Many thanks in advance,
    Regards,
    Richard Clarke

    Edit: I know about using SQL strings and the risk of SQL injection attacks - I studied this a bit in my dissertation a few weeks ago. Basically the website uses an object oriented architecture. There are many classes - for example Product - which have many "Attributes" (I created my own class called Attribute, which has properties such as DataField, Name and Value, where DataField is used to get or update data, Name is displayed on the administration frontend when creating or updating a Product, and Value, which may be displayed on the customer frontend, is set by the administrator). DataField is the field I will be using in the "UPDATE Blah SET @Field = @Value".

    I know this is probably confusing, but it's really complicated to explain - I have a really good understanding of the entire system in my head but I can't put it into words easily. Basically the structure is set up such that no user will be able to change the value of DataField or Name, but they can change Value. I think if I were to use dynamic parameterised SQL strings there would therefore be no risk of SQL injection attacks. I mean, basically loop through all the attributes so that it ends up like:

    UPDATE Products SET [Name] = @Name, Description = @Description, Display = @Display

    Then loop through all the attributes again and add the parameter values - this will have the same effect as using stored procedures, right? I don't mind adding to the page load time, since this is mainly going to affect the administration frontend and will only marginally affect the customer frontend.
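    For what it's worth, a minimal sketch of the dynamic-but-parameterised approach described in the edit above, written in C# (the site is VB.NET, but the ADO.NET calls are identical). The ProductAttribute shape, the Products/ProductID names, and the helper itself are assumptions drawn from the description, not code from the original question:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class ProductAttribute
    {
        public string DataField { get; set; }
        public string Name { get; set; }
        public object Value { get; set; }
    }

    public static class ProductUpdater
    {
        // Builds e.g. UPDATE [Products] SET [Name] = @Name, [Description] = @Description WHERE ProductID = @ProductID
        public static SqlCommand BuildUpdateCommand(SqlConnection conn, string tableName,
                                                    int productId, IEnumerable<ProductAttribute> attributes)
        {
            var setClauses = new List<string>();
            var cmd = new SqlCommand();
            cmd.Connection = conn;

            foreach (var attr in attributes)
            {
                // DataField comes from the application's own class definitions, never from
                // user input, so splicing it into the SQL text does not reopen the injection
                // hole; every user-supplied value still travels as a parameter.
                setClauses.Add("[" + attr.DataField + "] = @" + attr.DataField);
                cmd.Parameters.AddWithValue("@" + attr.DataField, attr.Value);
            }

            cmd.CommandText = "UPDATE [" + tableName + "] SET "
                            + string.Join(", ", setClauses.ToArray())
                            + " WHERE ProductID = @ProductID";
            cmd.Parameters.AddWithValue("@ProductID", productId);
            return cmd;
        }
    }

    As the edit anticipates, this gives the same injection safety as a stored procedure: only the column names, which the application controls, are spliced into the text.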

    Read the article

  • Unable to enable wireless on a Vostro 2520

    - by Joe
    I have a Vostro 2520 and I'm not sure how to enable wireless on my machine. The details are given below; I would appreciate any pointers to resolving this issue.

    lsmod returns:

    Module                 Size  Used by
    ath9k                132390  0
    ath9k_common          14053  1  ath9k
    ath9k_hw             411151  2  ath9k,ath9k_common
    ath                   24067  3  ath9k,ath9k_common,ath9k_hw
    b43                  365785  0
    mac80211             506816  2  ath9k,b43
    cfg80211             205544  4  ath9k,ath,b43,mac80211
    bcma                  26696  1  b43
    ssb                   52752  1  b43
    ndiswrapper          282628  0
    ums_realtek           18248  0
    usb_storage           49198  1  ums_realtek
    uas                   18180  0
    snd_hda_codec_hdmi    32474  1
    snd_hda_codec_cirrus  24002  1
    joydev                17693  0
    parport_pc            32866  0
    ppdev                 17113  0
    rfcomm                47604  0
    bnep                  18281  2
    bluetooth            180104 10  rfcomm,bnep
    psmouse               97362  0
    dell_wmi              12681  0
    sparse_keymap         13890  1  dell_wmi
    snd_hda_intel         33773  3
    snd_hda_codec        127706  3  snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel
    snd_hwdep             13668  1  snd_hda_codec
    snd_pcm               97188  3  snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
    snd_seq_midi          13324  0
    snd_rawmidi           30748  1  snd_seq_midi
    snd_seq_midi_event    14899  1  snd_seq_midi
    snd_seq               61896  2  snd_seq_midi,snd_seq_midi_event
    snd_timer             29990  2  snd_pcm,snd_seq
    snd_seq_device        14540  3  snd_seq_midi,snd_rawmidi,snd_seq
    wmi                   19256  1  dell_wmi
    snd                   78855 16  snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
    mac_hid               13253  0
    i915                 473240  3
    drm_kms_helper        46978  1  i915
    uvcvideo              72627  0
    drm                  242038  4  i915,drm_kms_helper
    videodev              98259  1  uvcvideo
    soundcore             15091  1  snd
    dell_laptop           18119  0
    dcdbas                14490  1  dell_laptop
    i2c_algo_bit          13423  1  i915
    v4l2_compat_ioctl32   17128  1  videodev
    snd_page_alloc        18529  2  snd_hda_intel,snd_pcm
    video                 19596  1  i915
    serio_raw             13211  0
    mei                   41616  0
    lp                    17799  0
    parport               46562  3  parport_pc,ppdev,lp
    r8169                 62099  0

    sudo lshw -class network:

    *-network UNCLAIMED
        description: Network controller
        product: Broadcom Corporation
        vendor: Broadcom Corporation
        physical id: 0
        bus info: pci@0000:07:00.0
        version: 01
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list
        configuration: latency=0
        resources: memory:f7c00000-f7c07fff
    *-network
        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:09:00.0
        logical name: eth0
        version: 07
        serial: 78:45:c4:a3:aa:65
        size: 100Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.1.5 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
        resources: irq:41 ioport:e000(size=256) memory:f0004000-f0004fff memory:f0000000-f0003fff

    rfkill list all:

    0: dell-wifi: Wireless LAN
        Soft blocked: yes
        Hard blocked: yes
    1: dell-bluetooth: Bluetooth
        Soft blocked: yes
        Hard blocked: yes

    Output of lspci:

    00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
    00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
    00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
    00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
    07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
    09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

    Output of lspci -v:

    00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
        Subsystem: Dell Device 0558
        Flags: bus master, fast devsel, latency 0
        Capabilities: <access denied>
        Kernel driver in use: agpgart-intel

    00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        Subsystem: Dell Device 0558
        Flags: bus master, fast devsel, latency 0, IRQ 43
        Memory at f7800000 (64-bit, non-prefetchable) [size=4M]
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        I/O ports at f000 [size=64]
        Expansion ROM at <unassigned> [disabled]
        Capabilities: <access denied>
        Kernel driver in use: i915
        Kernel modules: i915

    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        Subsystem: Dell Device 0558
        Flags: bus master, fast devsel, latency 0, IRQ 42
        Memory at f7d0a000 (64-bit, non-prefetchable) [size=16]
        Capabilities: <access denied>
        Kernel driver in use: mei
        Kernel modules: mei

    00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) (prog-if 20 [EHCI])
        Subsystem: Dell Device 0558
        Flags: bus master, medium devsel, latency 0, IRQ 16
        Memory at f7d08000 (32-bit, non-prefetchable) [size=1K]
        Capabilities: <access denied>
        Kernel driver in use: ehci_hcd

    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        Subsystem: Dell Device 0558
        Flags: bus master, fast devsel, latency 0, IRQ 44
        Memory at f7d00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel

    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
        Capabilities: <access denied>
        Kernel driver in use: pcieport
        Kernel modules: shpchp

    00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
        Memory behind bridge: f7c00000-f7cfffff
        Capabilities: <access denied>
        Kernel driver in use: pcieport
        Kernel modules: shpchp

    00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=09, subordinate=09, sec-latency=0
        I/O behind bridge: 0000e000-0000efff
        Prefetchable memory behind bridge: 00000000f0000000-00000000f00fffff
        Capabilities: <access denied>
        Kernel driver in use: pcieport
        Kernel modules: shpchp

    00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) (prog-if 20 [EHCI])
        Subsystem: Dell Device 0558
        Flags: bus master, medium devsel, latency 0, IRQ 23
        Memory at f7d07000 (32-bit, non-prefetchable) [size=1K]
        Capabilities: <access denied>
        Kernel driver in use: ehci_hcd

    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        Subsystem: Dell Device 0558
        Flags: bus master, medium devsel, latency 0
        Capabilities: <access denied>
        Kernel modules: iTCO_wdt

    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) (prog-if 01 [AHCI 1.0])
        Subsystem: Dell Device 0558
        Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 40
        I/O ports at f0b0 [size=8]
        I/O ports at f0a0 [size=4]
        I/O ports at f090 [size=8]
        I/O ports at f080 [size=4]
        I/O ports at f060 [size=32]
        Memory at f7d06000 (32-bit, non-prefetchable) [size=2K]
        Capabilities: <access denied>
        Kernel driver in use: ahci

    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        Subsystem: Dell Device 0558
        Flags: medium devsel, IRQ 11
        Memory at f7d05000 (64-bit, non-prefetchable) [size=256]
        I/O ports at f040 [size=32]
        Kernel modules: i2c-i801

    07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        Subsystem: Dell Device 0016
        Flags: bus master, fast devsel, latency 0, IRQ 10
        Memory at f7c00000 (64-bit, non-prefetchable) [size=32K]
        Capabilities: <access denied>

    09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)
        Subsystem: Dell Device 0558
        Flags: bus master, fast devsel, latency 0, IRQ 41
        I/O ports at e000 [size=256]
        Memory at f0004000 (64-bit, prefetchable) [size=4K]
        Memory at f0000000 (64-bit, prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: r8169
        Kernel modules: r8169

    Read the article

  • Panning weirdness on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox like" control, adding some functionality. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be in my OnMouseMove method. If I use the following code I get the drag speed and precision I want but, of course, when I release the mouse button and try to drag again, the image is restored to its original position.

    using System.Drawing;
    using System.Windows.Forms;

    namespace Testing
    {
        public partial class ScrollablePictureBox : UserControl
        {
            private Image image;
            private bool centerImage;

            public Image Image
            {
                get { return image; }
                set { image = value; Invalidate(); }
            }

            public bool CenterImage
            {
                get { return centerImage; }
                set { centerImage = value; Invalidate(); }
            }

            public ScrollablePictureBox()
            {
                InitializeComponent();
                SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
                Image = null;
                AutoScroll = true;
                AutoScrollMinSize = new Size(0, 0);
            }

            private Point clickPosition;
            private Point scrollPosition;

            protected override void OnMouseDown(MouseEventArgs e)
            {
                base.OnMouseDown(e);
                clickPosition.X = e.X;
                clickPosition.Y = e.Y;
            }

            protected override void OnMouseMove(MouseEventArgs e)
            {
                base.OnMouseMove(e);
                if (e.Button == MouseButtons.Left)
                {
                    scrollPosition.X = clickPosition.X - e.X;
                    scrollPosition.Y = clickPosition.Y - e.Y;
                    AutoScrollPosition = scrollPosition;
                }
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);

                if (Image == null)
                    return;

                int centeredX = AutoScrollPosition.X;
                int centeredY = AutoScrollPosition.Y;

                if (CenterImage)
                {
                    // Something not relevant
                }

                AutoScrollMinSize = new Size(Image.Width, Image.Height);
                e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height));
            }
        }
    }

    But if I modify my OnMouseMove method to look like this:

    protected override void OnMouseMove(MouseEventArgs e)
    {
        base.OnMouseMove(e);
        if (e.Button == MouseButtons.Left)
        {
            scrollPosition.X += clickPosition.X - e.X;
            scrollPosition.Y += clickPosition.Y - e.Y;
            AutoScrollPosition = scrollPosition;
        }
    }

    ... you will see that the dragging is not as smooth as before, and sometimes it behaves weirdly (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls in a desperate move to solve this issue, haha, but again, it didn't work. Thanks for your time.
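    For reference, the usual way to make this kind of pan survive across separate drags is to record the scroll offset at mouse-down and move relative to it. A sketch under that assumption - not necessarily the exact feel the control should end up with:

    private Point clickPosition;     // where the drag started, in client coordinates
    private Point startScrollOffset; // scroll offset at the moment the drag started

    protected override void OnMouseDown(MouseEventArgs e)
    {
        base.OnMouseDown(e);
        clickPosition = e.Location;
        // AutoScrollPosition reads back negative but expects a positive value
        // when assigned, hence the negation here.
        startScrollOffset = new Point(-AutoScrollPosition.X, -AutoScrollPosition.Y);
    }

    protected override void OnMouseMove(MouseEventArgs e)
    {
        base.OnMouseMove(e);
        if (e.Button == MouseButtons.Left)
        {
            // new offset = offset at drag start + distance the mouse has moved since
            AutoScrollPosition = new Point(
                startScrollOffset.X + (clickPosition.X - e.X),
                startScrollOffset.Y + (clickPosition.Y - e.Y));
        }
    }

    This keeps the first variant's direct 1:1 tracking while avoiding both the snap-back on a new drag and the compounding "+=" drift of the second variant.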

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. The requirements are:

    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to, with URLs and imported flashcards being saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answers:

    Jim Smith's flashcards from English class 53-222, winter semester 2009
    ==fla
    Das kann nicht sein.
    That can't be.
    ==fla
    Es sei denn, er kommt nicht.
    Unless he doesn't come.

    The user then makes the URL to his flashcard file public, and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, publish it, and distribute the URL. The flashcard readers will then recognize that it is a Google document and perform the necessary screen scraping to get at the raw text.

    I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files of 200K each, it will have to download 2MB of text just to find out whether any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google docs?

    2. Each reader will have the ability to let the person test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this metadata, e.g. so that when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approach seems to be URL + position-in-file, or better still URL + the original text of both question and answer fields, but both of these have obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible, i.e. type text into a Google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions to compensate for the lack of a database, the lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this: what advice, experience or best-practice tips can you share on the above two points?
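    On question 1, it's worth noting that plain HTTP already carries file modification times, so a conditional GET (If-Modified-Since) avoids re-downloading unchanged files wherever the server honors it - ordinary web servers usually do; published Google Docs may not. Below is a sketch using the desktop HttpWebRequest API; Silverlight's in-browser networking stack restricts verbs and headers, so this would likely need the client HTTP stack introduced in Silverlight 3:

    using System;
    using System.Net;

    static class FlashcardFeedChecker
    {
        // Returns true when the feed changed since lastSeen; assumes the server
        // answers 304 Not Modified for unchanged content.
        public static bool HasChanged(string url, DateTime lastSeen)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.IfModifiedSince = lastSeen; // sent as the If-Modified-Since header

            try
            {
                using (request.GetResponse())
                {
                    return true; // 200 OK: newer content exists
                }
            }
            catch (WebException ex)
            {
                var response = ex.Response as HttpWebResponse;
                if (response != null && response.StatusCode == HttpStatusCode.NotModified)
                    return false; // 304: nothing new, nothing transferred
                throw;
            }
        }
    }

    A reader subscribed to 10 feeds could then poll HasChanged for each one and fetch only the files that report new content, rather than downloading all 2MB on every check.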

    Read the article
