Search Results


  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable. Traditionally, projects have focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus on delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business.

    In OUM, this principle is extended to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements. Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project.

    OUM provides well-defined templates for many of its tasks. Use of these templates is optional, as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Relative encapsulation design

    - by taher1992
    Let's say I am building a 2D application with the following design: there is a Level object that manages the world, and there are world objects which are entities inside the Level. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes get properties; the set properties are private (or protected) and only available to inherited classes. But of course, Level is responsible for these world objects and must somehow be able to manipulate at least some of their private setters. As it stands, Level has no access, meaning the world objects would have to make their private setters public (violating encapsulation). How should I tackle this problem? Should I just make everything public? Currently what I'm doing is having an inner class inside the game object that does the set work. So when Level needs to update an object's location, it goes something like this:

    void ChangeObject(GameObject targetObject, int newX, int newY)
    {
        // targetObject.SetX and targetObject.SetY cannot be set directly
        var setter = new GameObject.Setter(targetObject);
        setter.SetX(newX);
        setter.SetY(newY);
    }

    This code feels like overkill, but it doesn't feel right to make everything public so that anything can change an object's location, for example.
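
    One way to avoid both public setters and the inner-class ceremony is to expose mutation through a narrow interface that is implemented explicitly, so only code holding the interface view (such as Level) can move an object. Below is a minimal C# sketch of that idea; the IMovable name and its members are illustrative assumptions, not from the original post:

    public interface IMovable
    {
        void MoveTo(int x, int y);
    }

    public class GameObject : IMovable
    {
        public int X { get; private set; }
        public int Y { get; private set; }

        // Explicit implementation: hidden unless the caller casts to IMovable,
        // so the public surface of GameObject stays read-only.
        void IMovable.MoveTo(int x, int y)
        {
            X = x;
            Y = y;
        }
    }

    public class Level
    {
        public void ChangeObject(GameObject targetObject, int newX, int newY)
        {
            ((IMovable)targetObject).MoveTo(newX, newY);
        }
    }

    Any caller can still cast, so this is a speed bump rather than a wall, but it keeps accidental mutation out of the public API without a helper class per object.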

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools
    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both PowerShell (for Windows) and Bash (for Mac and Linux) tool sets to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines
    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally – no import/export steps required.

    Web Sites
    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching
    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache
    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit of the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support
    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More
    The above is just a short list of some of the improvements shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them. Hope this helps, Scott
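
    For context on what the AppFabric Cache Server API mentioned above looks like from role code, here is a minimal, hedged C# sketch; it assumes a dataCacheClient section has already been added to the role configuration, and the key and value are made up for illustration:

    using Microsoft.ApplicationServer.Caching;

    public class CacheDemo
    {
        public void Run()
        {
            // The parameterless factory reads the role's dataCacheClient configuration.
            DataCacheFactory factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            // Data written by one role instance is visible to every other instance.
            cache.Put("greeting", "hello from the distributed cache");
            string greeting = (string)cache.Get("greeting");
        }
    }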

    Read the article

  • Problems with creating XML with DOMDocument in PHP

    - by maralbjo
    The below code is fetched from php.net (http://docs.php.net/manual/en/domdocument.savexml.php). My problem is - it doesn't work. My only output from this is: "Saving all the document: Saving only the title part:". What am I missing here?

    $doc = new DOMDocument('1.0');

    // we want a nice output
    $doc->formatOutput = true;

    $root = $doc->createElement('book');
    $root = $doc->appendChild($root);

    $title = $doc->createElement('title');
    $title = $root->appendChild($title);

    $text = $doc->createTextNode('This is the title');
    $text = $title->appendChild($text);

    echo "Saving all the document:\n";
    echo $doc->saveXML() . "\n";

    echo "Saving only the title part:\n";
    echo $doc->saveXML($title);

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected) 1 timestamp, 1 measurement, and 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
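
    For comparison, the denormalized shape the post is weighing – the timestamp and all 8 vector values folded into the measurement row as dedicated columns – would look something like the sketch below. The V1..V8 property names and the DateTime timestamp are assumptions for illustration:

    // One row per measurement: a single insert replaces the 10 inserts described above.
    public class FlatMeasurement
    {
        public virtual Guid Id { get; private set; }
        public virtual Device Device { get; private set; }
        public virtual DateTime Timestamp { get; private set; }
        public virtual double V1 { get; private set; }
        public virtual double V2 { get; private set; }
        public virtual double V3 { get; private set; }
        public virtual double V4 { get; private set; }
        public virtual double V5 { get; private set; }
        public virtual double V6 { get; private set; }
        public virtual double V7 { get; private set; }
        public virtual double V8 { get; private set; }
    }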

    Read the article

  • A Question About Embedding HTML In PHP

    - by Brian
    Hello all. Some time ago I read a posting on a board where the person was talking poorly about people who embed HTML within their PHP. I do quite a bit of PHP development, but I still interleave HTML and PHP in the same document. Is there a better way to do this, am I doing it wrong? I know that in JSP/JSF they use an XML document with namespaces to insert their HTML code, so I was wondering if there was a similar facility in PHP that I should be taking advantage of. Thanks for taking the time to read. :-)

    Read the article

  • Windows Phone JSON deserialization

    - by user2042227
    I have a weird issue. I am making a few calls in my app to a web service, which replies with data. I am using a token-based login system, so the first time the user enters the app I get a token from the web service to log in for that specific user, and that token returns only that user's details. The problem I am having is that when the user changes I need to make the calls again to get the new user's details. Using Visual Studio's breakpoint debugging, I can see the new user's token making the call; the problem appears when the JSON is getting deserialized – it is as if it still reads the old data and deserializes that. When I exit my app with the new user, it works fine, so it's as if it is reading cached values, but I have no idea how to clear them. I am sure the new calls are being made and the problem lies with the deserializing, but I have tried clearing the values before deserializing them again, and nothing works. Am I missing something with the JSON deserializer? How can I clear its cached values? Here I make the call and set it not to cache, so it makes a new call every time: client.Headers[HttpRequestHeader.CacheControl] = "no-cache"; var token_details = await client.DownloadStringTaskAsync(uri); And here I deserialize the result; it is at this point the old data gets shown – the raw JSON inside "token_details" is correct, but once I deserialize token_details, it shows the wrong data: deserialized = JsonConvert.DeserializeObject(token_details); The class I am deserializing into is a simple class, nothing special happening here; I have even tried making the constructor clear the values each time it gets called: public class test { public string status { get; set; } public string name { get; set; } public string birthday { get; set; } public string errorDes { get; set; } public test() { status = ""; name = ""; birthday = ""; errorDes = ""; } } URIs before making the calls: {https://whatever.co.za/token/?code=BEBCg==&id=WP7&junk=121edcd5-ad4d-4185-bef0-22a4d27f2d0c} - old call "UBCg==" - old reply {https://whatever.co.za/token/?code=ABCg==&id=WP7&junk=56cc2285-a5b8-401e-be21-fec8259de6dd} - new call "UBCg==" - new response, which is the same response as the old call. As you can see, I did attach a new GUID every time I make the call; the new URI is read before making the DownloadStringTaskAsync call, but it still returns the old data.
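
    As a sanity check on the deserialization step, a minimal Json.NET round trip against the question's class looks like this; the sample JSON string is made up, and the point is that the generic overload builds a brand-new object purely from the string passed in:

    using Newtonsoft.Json;

    // Hypothetical fresh payload; Json.NET keeps no cache of previously
    // deserialized values, so the result reflects only this string.
    string token_details = "{\"status\":\"ok\",\"name\":\"new user\",\"birthday\":\"2000-01-01\",\"errorDes\":\"\"}";
    test deserialized = JsonConvert.DeserializeObject<test>(token_details);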

    Read the article

  • How to retrieve a style's value in JavaScript?

    - by stan
    I am looking for a way to retrieve the style from an element that has a style set upon it by the style tag. <style> #box {width: 100px;} </style> In the body: <div id="box"></div> I'm looking for straight JavaScript without the use of libraries. I tried the following, but keep receiving blanks: alert (document.getElementById("box").style.width); alert (document.getElementById("box").style.getPropertyValue("width")); I noticed that I'm only able to use the above if I have set the style using JavaScript, but not when the style comes from the style tags.

    Read the article

  • jQuery addClass is not working

    - by user1269625
    Hey ya'll, I have this code here and it is supposed to add the class name "cboxElement": $(".wpcart_gallery a:first").addClass("cboxElement"); but it does not work. I have the proper jQuery file in my header, I have surrounded this in jQuery('document').ready(function($){......}); and it works for all my other jQuery except the addClass call. Here is what I am trying to add the class to: <div class="wpcart_gallery" style="text-align:center; padding-top:5px;"> <a class="thickbox" href="DSC_0037.jpg" rev="DSC_0037.jpg"></a> </div> Anybody know why this isn't working? I am new to jQuery; this is also in WordPress, hence the jQuery('document').

    Read the article

  • XSL-FO intellisense and national languages with Apache FOP

    - by Lukasz Kurylo
    Some time ago I showed how to get intellisense and how to configure FO.NET to get national characters into the generated pdf files. Due to the limitations that I mentioned in my previous post, I started playing with Apache FOP. In this post I want to show how to achieve the same result as I showed in the two posts related to FO.NET.

    Intellisense

    To get intellisense in the XSL-FO templates, set the xsi:schemaLocation the same way I showed in this post. The only difference from FO.NET is that, when generating the document with the code I showed last time, we will get an exception:

    org.apache.fop.fo.ValidationException: Invalid property encountered on "fo:root": xsi:schemaLocation (See position 6:11)

    Fortunately there is a very easy way to resolve this without removing the entire attribute along with the intellisense. Add an ignoreNamespace call to the FopFactory:

    FopFactory fopFactory = FopFactory.newInstance();
    fopFactory.ignoreNamespace("http://www.w3.org/2001/XMLSchema-instance");

    Notice that the URL passed to this method is the namespace for the xmlns:xsi namespace, not the xsi:schemaLocation.

    Fonts / national characters

    This part works a little differently, but is no more complicated than it was with FO.NET. To set the fonts in Apache FOP 1.0, we need a configuration file. A sample one can be taken from the conf subdirectory of the directory where we unpacked the FOP binaries; the file is called fop.xconf. We must copy this file to our solution. In the simplest case, we can add <auto-detect/> inside the <fonts> tag. Thanks to this, FOP will index all fonts available on the installed operating system. This should be no problem if we have an HTTP handler or a WCF service on the server that serves the generated pdf documents; in that situation we can use all fonts available on that server.

    To use this config file, we must set a path to it:

    FopFactory fopFactory = FopFactory.newInstance();
    fopFactory.setUserConfig(new File("fop.xconf"));

    Read the article

  • itextsharp PdfCopy and landscape pages

    - by Andreas Rehm
    I'm using itextsharp to join multiple pdf documents and add a footer. My code works fine, except for landscape pages: it isn't detecting the page rotation, so the footer is not centered for landscape:

    public static int AddPagesFromStream(Document document, PdfCopy pdfCopy, Stream m, bool addFooter, int detailPages, string footer, int footerPageNumOffset, int numPages, string pageLangString, string printLangString)
    {
        CreateFont();
        try
        {
            m.Seek(0, SeekOrigin.Begin);
            var reader = new PdfReader(m);

            // get page count
            var pdfPages = reader.NumberOfPages;
            var i = 0;

            // add pages
            while (i < pdfPages)
            {
                i++;

                // import page with pdfcopy
                var page = pdfCopy.GetImportedPage(reader, i);

                // get page center
                float posX;
                float posY;
                var rotation = page.BoundingBox.Rotation;
                if (rotation == 0 || rotation == 180)
                {
                    posX = page.Width / 2;
                    posY = 0;
                }
                else
                {
                    posX = page.Height / 2;
                    posY = 20f;
                }

                var ps = pdfCopy.CreatePageStamp(page);
                var cb = ps.GetOverContent();

                // add footer
                cb.SetColorFill(BaseColor.WHITE);
                var gs1 = new PdfGState { FillOpacity = 0.8f };
                cb.SetGState(gs1);
                cb.Rectangle(0, 0, document.PageSize.Width, 46f + posY);
                cb.Fill();

                // Text
                cb.SetColorFill(BaseColor.BLACK);
                cb.SetFontAndSize(baseFont, 7);
                cb.BeginText();

                // create text
                var pages = string.Format(pageLangString, i + footerPageNumOffset, numPages);
                cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, printLangString, posX, 40f + posY, 0f);
                cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, footer, posX, 28f + posY, 0f);
                cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, pages, posX, 20f + posY, 0f);
                cb.EndText();
                ps.AlterContents();

                // add page to new pdf
                pdfCopy.AddPage(page);
            }

            // close PdfReader
            reader.Close();

            // return number of pages
            return i;
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
            return 0;
        }
    }

    How do I detect the page rotation (e.g. landscape) format in this case? The given example works for PdfReader but not for PdfCopy.

    Edit: Why do I need PdfCopy? I tried copying a Word pdf export. Some Word hyperlinks will not work when you try to copy pages with PdfReader. Only PdfCopy transfers all the needed page information.

    Edit: (SOLVED) You need to use reader.GetPageRotation(i);
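
    Applying the fix from the final edit, the rotation is read from the PdfReader rather than from the page imported through PdfCopy; a sketch of just the changed lines:

    // Per the SOLVED note above: the imported page loses its rotation,
    // but the PdfReader still knows it.
    var rotation = reader.GetPageRotation(i);
    if (rotation == 0 || rotation == 180)
    {
        posX = page.Width / 2;
        posY = 0;
    }
    else
    {
        posX = page.Height / 2;
        posY = 20f;
    }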

    Read the article

  • SharePoint 2007: Disabling Edit/Read Only mode?

    - by TheGambler
    If I open a doc in read-only mode, I'm able to press Save, which opens a Save As box whose default directory is the directory on the SharePoint server; if you press Save, you save it to the server. This actually makes the whole process not really "read only" mode, since I could actually update the document. Is there a way to prevent this from happening, so that if someone chooses read only there is no way possible to upload any changes back to the SharePoint site? Also, it has been suggested as a solution to get rid of the edit/read-only option, so that people have to check out the document. Is there a way to remove the edit/read-only option on documents?

    Read the article

  • XML validation in Java - why does this fail?

    - by jd
    Hi, first time dealing with XML, so please be patient. The code below is probably evil in a million ways (I'd be very happy to hear about all of them), but the main problem is of course that it doesn't work :-)

    public class Test {

        private static final String JSDL_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl";
        private static final String JSDL_POSIX_APPLICATION_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix";

        public static void main(String[] args) {
            System.out.println(Test.createJSDLDescription("/bin/echo", "hello world"));
        }

        private static String createJSDLDescription(String execName, String args) {
            Document jsdlJobDefinitionDocument = getJSDLJobDefinitionDocument();
            String xmlString = null;

            // create the elements
            Element jobDescription = jsdlJobDefinitionDocument.createElement("JobDescription");
            Element application = jsdlJobDefinitionDocument.createElement("Application");
            Element posixApplication = jsdlJobDefinitionDocument.createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "POSIXApplication");
            Element executable = jsdlJobDefinitionDocument.createElement("Executable");
            executable.setTextContent(execName);
            Element argument = jsdlJobDefinitionDocument.createElement("Argument");
            argument.setTextContent(args);

            //join them into a tree
            posixApplication.appendChild(executable);
            posixApplication.appendChild(argument);
            application.appendChild(posixApplication);
            jobDescription.appendChild(application);
            jsdlJobDefinitionDocument.getDocumentElement().appendChild(jobDescription);

            DOMSource source = new DOMSource(jsdlJobDefinitionDocument);
            validateXML(source);

            try {
                Transformer transformer = TransformerFactory.newInstance().newTransformer();
                transformer.setOutputProperty(OutputKeys.INDENT, "yes");
                StreamResult result = new StreamResult(new StringWriter());
                transformer.transform(source, result);
                xmlString = result.getWriter().toString();
            } catch (Exception e) {
                e.printStackTrace();
            }
            return xmlString;
        }

        private static Document getJSDLJobDefinitionDocument() {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = null;
            try {
                builder = factory.newDocumentBuilder();
            } catch (Exception e) {
                e.printStackTrace();
            }
            DOMImplementation domImpl = builder.getDOMImplementation();
            Document theDocument = domImpl.createDocument(JSDL_SCHEMA_URL, "JobDefinition", null);
            return theDocument;
        }

        private static void validateXML(DOMSource source) {
            try {
                URL schemaFile = new URL(JSDL_SCHEMA_URL);
                SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = schemaFactory.newSchema(schemaFile);
                Validator validator = schema.newValidator();
                DOMResult result = new DOMResult();
                validator.validate(source, result);
                System.out.println("is valid");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    It spits out a somewhat odd message: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'JobDescription'. One of '{"http://schemas.ggf.org/jsdl/2005/11/jsdl":JobDescription}' is expected. Where am I going wrong here? Thanks a lot

    Read the article

  • Upload a file to SharePoint through the built-in web services

    - by Andy McCluggage
    What is the best way to upload a file to a Document Library on a SharePoint server through the built-in web services that WSS 3.0 exposes? Following the two initial answers: we definitely need to use the web service layer, as we will be making these calls from remote client applications. The WebDAV method would work for us, but we would prefer to be consistent with the web service integration method. "There is additionally a web service to upload files, painful but works all the time." Are you referring to the "Copy" service? We have been successful with this service's CopyIntoItems method. Would this be the recommended way to upload a file to Document Libraries using only the WSS web service API? I have posted our code as a suggested answer.
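
    For reference, a CopyIntoItems call against the Copy web service looks roughly like the sketch below; it assumes a Visual Studio-generated proxy class named Copy for /_vti_bin/copy.asmx, and the server URL, file paths, and field values are placeholders:

    // Hedged sketch: upload a local file into a document library via copy.asmx.
    Copy copyService = new Copy();
    copyService.Url = "http://server/_vti_bin/copy.asmx";
    copyService.UseDefaultCredentials = true;

    byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\local\report.docx");
    string[] destinationUrls = { "http://server/Shared Documents/report.docx" };

    FieldInformation titleField = new FieldInformation
    {
        Type = FieldType.Text,
        InternalName = "Title",
        DisplayName = "Title",
        Value = "report"
    };

    CopyResult[] results;
    copyService.CopyIntoItems(
        @"C:\local\report.docx",     // SourceUrl, recorded as metadata on the item
        destinationUrls,
        new[] { titleField },
        fileBytes,
        out results);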

    Read the article

  • Object drag delay issue

    - by Johnny Darvall
    I have this code that drags stuff around perfectly in IE – however in Firefox the onmousedown drag of the object does not immediately drag, but shows the no-entry cursor, and then after onmouseup the object drags around freely. The object does stop dragging on the next onmouseup. The object should only drag in the onmousedown state, while the onmouseup call should cancel the drag by making j_OK=0. I think it may have something to do with the image inside... the object: <em style=position:absolute;left:0;top:0;width:32;height:32;display:block> <img src=abc.gif onmousedown=P_MV(this.parentNode) style=position:absolute;left:0;top:0;width:inherit> </em> function P_MV(t) { p_E=t j_oy=parseInt(p_E.style.top) j_ox=parseInt(p_E.style.left) j_OK=1 document.onselectstart=function(){return false} document.onmousemove=P_MVy } function P_MVy(e) { if(j_OK) { p_E.style.top=(j_FF?e.clientY:event.clientY)-j_y+j_oy p_E.style.left=(j_FF?e.clientX:event.clientX)-j_x+j_ox } return false }

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter. It wouldn't help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter with the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let's say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

    Expression    | Data Type | Operator | Value
    [Category]    | Text      | =        | Bikes
    [SalesAmount] |           | >        | 1000

    then AND logic is used, which means that both conditions must be true. That's not the result I want. Instead, I need to set up the filter like this:

    Expression                                                                            | Data Type | Operator | Value
    =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000 | Boolean   | =        | =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if the total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can't see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

    Read the article

  • Grouping a comma separated value on common data [closed]

    - by Ankit
    I have a table with col1 as an int id, col2 as a varchar (comma-separated values), and a third column, group, for assigning a group number. The table looks like:

    col1 | col2  | group
    1    | 2,3,4 |
    2    | 5,6   |
    3    | 1,2,5 |
    4    | 7,8   |
    5    | 11,3  |
    6    | 22,8  |

    This is only a sample of the real data. Now I have to assign a group number to each row so that the output looks like:

    col1 | col2  | group
    1    | 2,3,4 | 1
    2    | 5,6   | 1
    3    | 1,2,5 | 1
    4    | 7,8   | 2
    5    | 11,3  | 1
    6    | 22,8  | 2

    The logic for assigning group numbers is that rows whose col2 lists share any value must get the same group number: wherever '2' appears in col2, the row gets the same group number, and since 2,3,4 appear together, all three values, wherever they are found in col2, are assigned the same group. The key point is that 2,3,4 and 1,2,5 both contain 2 in col2, so all of the integers 1,2,3,4,5 have to be assigned the same group number. I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; even normalization is not helpful in my context. This question is also on stackoverflow with a bounty on this link. Achieved so far: I have set the group column to auto-increment and then wrote this procedure:

    BEGIN
    declare cil1_new,col2_new,group_new int;
    declare done tinyint default 0;
    declare group_new varchar(100);
    declare cur1 cursor for select col1,col2,`group` from company ;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
    open cur1;
    REPEAT
    fetch cur1 into col1_new,col2_new,group_new;
    update company set group=group_new where match(col2) against(concat("'",col2_new,"'"));
    until done end repeat;
    close cur1;
    select * from company;
    END

    The procedure runs without syntax errors, but the problem is that I am not achieving exactly the desired result.

    Read the article

  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend so she can play games from her uni campus. The problem is I keep running into this error when trying to start OpenVPN: Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]: ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'iptables-restore: line 46 failed']: internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32 service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['iptables_openvpn']) Now I've already got my provider to enable the TUN/TAP device driver, and I checked this using # cat /dev/net/tun, which returned "File descriptor in bad state", which I believe means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated. --edit-- Just stumbled across this. Not sure how I missed it earlier. I believe I need to get modprobe ipt_mark & modprobe ipt_MARK run on the host node by my provider. Is this correct, and something I should try to get done?

    Read the article

  • This regex does not work only in Chrome

    - by Deeptechtons
    Hi, I just put up a validation function in JScript to validate the filename in a file upload control [input type file]. The function seems to work fine in FF and sometimes in IE, but never in Chrome. Basically, the function tests that the file name is at least 1 and up to 25 characters long, contains only valid characters and numbers [no spaces], and is one of the file types in the list. Could you throw some light on this? function validate(Uploadelem) { var objRgx = new RegExp(/^[\w]{1,25}\.*\.(jpg|gif|png|jpeg|doc|docx|pdf|txt|rtf)$/); objRgx.ignoreCase = true; if (objRgx.test(Uploadelem.value)) { document.getElementById('moreUploadsLink').style.display = 'block'; } else { document.getElementById('moreUploadsLink').style.display = 'none'; } }

    Read the article

  • How to select names from a column in MS Excel 2007

    - by Alks
    Hey guys, I have an Excel document which has a list of students and their group names. I have another sheet within the same Excel document called comments. In this sheet, I would like to list the individual team names. There are 65 students and 14 defined groups. Is there a way to select the 14 group names, without repetition? Cells B3-B67 have the student names. Cells C3-C67 have the team names. The team names are entered against each student. I know in SQL I could use something like select distinct(team_name), but how can I replicate this in Excel? Cheers, Alks.

    Read the article

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters as an alarm bell that there may be a design problem somewhere. I am using a Generic Repository for an ASP.NET application and have a Controller with a growing number of parameters. public class GenericRepository<T> : IRepository<T> where T : class { protected DbContext Context { get; set; } protected DbSet<T> DbSet { get; set; } public GenericRepository(DbContext context) { Context = context; DbSet = context.Set<T>(); } ...//methods excluded to keep the question readable } I am using a DI container to pass in the DbContext to the generic repository. So far, this has met my needs and there are no other concrete implmentations of IRepository<T>. However, I had to create a dashboard which uses data from many Entities. There was also a form containing a couple of dropdown lists. Now using the generic repository this makes the parameter requirments grow quickly. The Controller will end up being something like public HomeController(IRepository<EntityOne> entityOneRepository, IRepository<EntityTwo> entityTwoRepository, IRepository<EntityThree> entityThreeRepository, IRepository<EntityFour> entityFourRepository, ILogError logError, ICurrentUser currentUser) { } It has about 6 IRepositories plus a few others to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only 1 DBContext per request and the DI container will serve the same DbContext to all of the Repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? Its a real world project with real world time constraints so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.

    Read the article

  • F# Simple Twitter Update

    - by mroberts
    A short while ago I posted some code for a C# twitter update. I decided to move the same functionality / logic to F#. Here is what I came up with.

    namespace Server.Actions

    open System
    open System.IO
    open System.Net
    open System.Text

    type public TwitterUpdate() =

        //member variables
        [<DefaultValue>] val mutable _body : string
        [<DefaultValue>] val mutable _userName : string
        [<DefaultValue>] val mutable _password : string

        //Properties
        member this.Body with get() = this._body and set(value) = this._body <- value
        member this.UserName with get() = this._userName and set(value) = this._userName <- value
        member this.Password with get() = this._password and set(value) = this._password <- value

        //Methods
        member this.Execute() =
            let login = String.Format("{0}:{1}", this._userName, this._password)
            let creds = Convert.ToBase64String(Encoding.ASCII.GetBytes(login))
            let tweet = Encoding.ASCII.GetBytes(String.Format("status={0}", this._body))
            let request = WebRequest.Create("http://twitter.com/statuses/update.xml") :?> HttpWebRequest

            request.Method <- "POST"
            request.ServicePoint.Expect100Continue <- false
            request.Headers.Add("Authorization", String.Format("Basic {0}", creds))
            request.ContentType <- "application/x-www-form-urlencoded"
            request.ContentLength <- int64 tweet.Length

            let reqStream = request.GetRequestStream()
            reqStream.Write(tweet, 0, tweet.Length)
            reqStream.Close()

            let response = request.GetResponse() :?> HttpWebResponse

            match response.StatusCode with
            | HttpStatusCode.OK -> true
            | _ -> false

    While the above seems to work, it feels to me like it is not taking advantage of some functional concepts. Love to get some feedback as to how to make the above more "functional" in nature. For example, I don't like the mutable properties.

    Read the article

  • https://www.googleapis.com/drive/v2/files/<fileid>/comments?alt=json returned "Not Found" on a file that can't be opened

    - by Kartik Ayyar
    More details below: https://www.googleapis.com/drive/v2/files/1iNMGIAFXuhS_CO_hnEO0_EJ9PAgT-hXYqWYv0MPGUTI/comments?alt=json returned "Not Found". The file is present in Drive and shows in drive.changes.list, but can't be opened in Google Drive either. There are two problems here: a) the file is somehow corrupt (it was a document imported into Drive, so that failed, but that isn't something I care about for the purposes of this question); b) the file shows up as existing in some API calls, but calls to read comments with the Drive SDK comments API fail. Here are results from an API call showing how the file does indeed exist: "file": { "kind": "drive#file", "id": "1iNMGIAFXuhS_CO_hnEO0_EJ9PAgT-hXYqWYv0MPGUTI", "etag": "\"o35FABD0TC3H-Up3OL3UA9kEB2w/MTM3MTc2NzU5NzEyNA\"", .... .... "iconLink": "https://ssl.gstatic.com/docs/doclist/images/icon_11_document_list.png", "title": "<removed>", "mimeType": "application/vnd.google-apps.document", "labels": { "starred": false, "hidden": false, "trashed": true, "restricted": false, "viewed": true },

    Read the article

  • Updating modules on VPS [closed]

    - by tertle
    Been trying to install OpenVPN on a VPS, but have run into a few problems when trying to start the OpenVPN server: Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]: ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'iptables-restore: line 46 failed']: internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32 service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn']) service failed to start due to unresolved dependencies: set(['iptables_openvpn']) Anyway, after a bit of playing around and some advice, I found that the Linux kernel and modules don't match on my server. uname -r returns: 2.6.18-028stab070.14 and ls /lib/modules returns: 2.6.18-028stab070.7 The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is: is it possible for me to update my modules on a VPS, and if so, how would I do this, or is this something I'll need to get my host to do? Thanks in advance.

    Read the article
