Search Results

Search found 29753 results on 1191 pages for 'best practices'.

  • Which ORM to use with SQL Azure?

    - by Jamie Chapman
    Just wondering what everyone's thoughts are on which ORM to use for SQL Azure? I'm fairly comfortable using LINQ-to-SQL and I believe it is possible to get it working with SQL Azure. However, from my understanding (correct me if I'm wrong), no further improvements will be made to LINQ-to-SQL in future releases of the .NET Framework. Alternatively, there is the Entity Framework... and further afield from the Microsoft camp is NHibernate. Ideally, any additional suggestions should be free or open source. I have seen Telerik's ORM, but that, of course, is a commercial product. I can get the definitions/benefits of each ORM myself by doing a Google search, but I was just interested in people's opinions as to which ORM works best for them (even if it is none of the above).

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

        DropFolder/
            Foo/
                foo.exe
            Bar/
                bar.dll
            Baz/
                baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it to MSBuild as a command-line argument (/p:CustomizableOutDir=true), but MSBuild seems to ignore it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command-line arguments to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or is there a better way to achieve this?

  • Good GUI Design Tool to mock up iPhone & Android applications

    - by Peter Delaney
    I am about to embark on developing a mobile application for both the iPhone and Android-based phones. I have most of my GUI mockups written down on a whiteboard and some in my head. I need to put them down on paper so I can relay my designs to others on my team. I was wondering what the best online GUI mockup tool is that I could use. I have a lot of experience with Visio and I truly like it, but I want a tool that is web based so I could collaborate with others on the team. Does anyone have a suggestion for an easy GUI mockup tool that I could use?

  • UIScrollView issue with autorotation and content scaling

    - by boliva
    Hi, I'm building a new (autorotating) iPad app that consists mainly of a screen-sized UIScrollView containing a UIImageView for an image which is 5 times the iPad screen resolution in portrait mode (3840x1024). What I haven't been able to accomplish is that whenever the device rotates (to whichever orientation), the imageView bounds and the scrollView contentSize adjust for the new height (maintaining the aspect ratio), so that the height of the image always fits the device height for the given orientation (so in the particular case of this image, it would be shown as 2880x768). I tried different combinations of autoresizingMask and contentMode for the imageView, but the closest I've been able to get is to display the image at the desired size but with white padding around it inside the scrollView content (to make up for the original orientation's contentSize). I also tried recalculating the scrollView contentSize in the didRotate.../viewWillAnimate... rotation-related methods in my viewController class, to no avail. Best regards and thanks for your time

  • Composite Key Dictionary

    - by AaronLS
    I have some objects in a List, let's say List<MyClass>, and MyClass has several properties. I would like to create an index of the list based on 3 properties of MyClass. In this case 2 of the properties are ints, and one property is a DateTime. Basically I would like to be able to do something like:

        Dictionary<CompositeKey, MyClass> MyClassListIndex = new Dictionary<CompositeKey, MyClass>();
        // Populate dictionary with items from the List<MyClass> MyClassList
        MyClass aMyClass = MyClassListIndex[(keyTripletHere)];

    I sometimes create multiple dictionaries on a list to index different properties of the classes it holds. I am not sure how best to handle composite keys though. I considered doing a checksum of the three values, but this runs the risk of collisions.
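
    One way to get a collision-free composite key (a minimal sketch, not from the original post; the field names IdA, IdB and Stamp are made up for illustration) is a small struct with value equality, which the Dictionary can hash directly instead of relying on a checksum:

        // Requires: using System; using System.Collections.Generic;
        struct CompositeKey : IEquatable<CompositeKey>
        {
            public readonly int IdA;        // first int property
            public readonly int IdB;        // second int property
            public readonly DateTime Stamp; // the DateTime property

            public CompositeKey(int idA, int idB, DateTime stamp)
            {
                IdA = idA;
                IdB = idB;
                Stamp = stamp;
            }

            public bool Equals(CompositeKey other)
            {
                return IdA == other.IdA && IdB == other.IdB && Stamp == other.Stamp;
            }

            public override bool Equals(object obj)
            {
                return obj is CompositeKey && Equals((CompositeKey)obj);
            }

            public override int GetHashCode()
            {
                unchecked // combine the three hashes; a collision only falls back to Equals
                {
                    int hash = 17;
                    hash = hash * 31 + IdA;
                    hash = hash * 31 + IdB;
                    hash = hash * 31 + Stamp.GetHashCode();
                    return hash;
                }
            }
        }

        // Hypothetical usage against the list described above:
        // var index = new Dictionary<CompositeKey, MyClass>();
        // foreach (var item in MyClassList)
        //     index[new CompositeKey(item.IdA, item.IdB, item.Stamp)] = item;

    Unlike a checksum, a hash collision here is harmless, because the dictionary uses Equals to distinguish keys that happen to share a hash code.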

  • X.509 Certificate validation with Java and Bouncycastle

    - by Rob
    Hi, through the BouncyCastle wiki page I was able to understand how to create an X.509 root certificate and a certification request, but I do not quite understand how to proceed, conceptually and programming-wise, after that. Let's assume party A does a cert request and gets its client certificate from the CA. How can some party B validate A's certificate? What kind of certificate does A need? A root certificate? A 'normal' client certificate? And how does the validation work at the programming level, if we assume that A has successfully sent its certificate in DER or PEM format to B? Any help is much appreciated. Best Regards, Rob

  • Silverlight DataGrid set cell IsReadOnly programatically

    - by Brandon Montgomery
    I am binding a data grid to a collection of Task objects. A particular column needs some special rules pertaining to editing:

        <!-- Percent Complete -->
        <data:DataGridTextColumn Header="%"
            ElementStyle="{StaticResource RightAlignStyle}"
            Binding="{Binding PercentComplete, Mode=TwoWay, Converter={StaticResource PercentConverter}}" />

    What I want to do is set the IsReadOnly property for each task's percent-complete cell based on a property on the actual Task object. I've tried this:

        <!-- Percent Complete -->
        <data:DataGridTextColumn Header="%"
            ElementStyle="{StaticResource RightAlignStyle}"
            Binding="{Binding PercentComplete, Mode=TwoWay, Converter={StaticResource PercentConverter}}"
            IsReadOnly="{Binding IsNotLocalID}" />

    but apparently you can't bind to the IsReadOnly property on a data grid column. What is the best way to do what I am trying to do?
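
    One workaround to consider (a sketch only, not from the original post) is to leave the column editable in XAML and cancel edits cell-by-cell in the grid's BeginningEdit event. The Task type and its IsNotLocalID property come from the question; the handler name and the Header comparison are assumptions:

        // Assumed to be wired up in XAML on the DataGrid, e.g. BeginningEdit="TaskGrid_BeginningEdit"
        private void TaskGrid_BeginningEdit(object sender, DataGridBeginningEditEventArgs e)
        {
            var task = e.Row.DataContext as Task;                       // the row's bound item
            bool isPercentColumn = (e.Column.Header as string) == "%";  // matches the column above

            if (task != null && isPercentColumn && task.IsNotLocalID)
            {
                e.Cancel = true; // this particular cell stays read-only for this row
            }
        }

    This keeps the read-only decision per row without needing a bindable IsReadOnly on the column itself.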

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh has gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF then, no matter how much of an expert you are in Oracle ADF, this book will make you happy, as it provides the detailed information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, it accelerates your learning big, big, big times.

    Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development.

    Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADF BC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates.

    Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling.

    Chapter 4 - you guessed it - is all about view objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object.

    Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading!

    Chapter 6 then is about Application Modules. Besides what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence.

    Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and which metadata files are involved.

    Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and lists of values.

    Chapter 9 introduces bounded task flows and the ADF Controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean that it is not worth reading (I, for example, enjoyed the controller talk very much).

    Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events.

    Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth.

    Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects.

    In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and addition for all Oracle ADF developers and those who want to become one. Frank

  • ColdFusion 9 Cluster with IIS7.5

    - by Adam Winter
    Does anyone know of a step-by-step process for setting up a ColdFusion 9 cluster using IIS 7.5? Either failover or network load balancing would be nice. With IIS not being clusterable in Windows 2008 R2, I'm not sure of the best means to configure the web server and the ColdFusion service. Some of the things I'm looking for are: With load balancing, what do you use out in front of the servers as a load balancer? If you're using a network share on a clustered file server for the website data files, how do you configure the ColdFusion service to run so that it has network access instead of running as Local System?

  • Forcing a mixed ISO-8859-1 and UTF-8 multi-line string into UTF-8

    - by knorv
    Consider the following problem: A multi-line string $junk contains some lines which are encoded in UTF-8 and some in ISO-8859-1. I don't know a priori which lines are in which encoding, so heuristics will be needed. I want to turn $junk into pure UTF-8 with proper re-encoding of the ISO-8859-1 lines. Also, in the event of errors in the processing I want to provide a "best effort result" rather than throwing an error. My current attempt looks like this:

        $junk = &force_utf8($junk);

        sub force_utf8 {
            my $input  = shift;
            my $output = '';
            foreach my $line (split(/\n/, $input)) {
                if (utf8::valid($line)) {
                    utf8::decode($line);
                }
                $output .= "$line\n";
            }
            return $output;
        }

    While this appears to work, I'm certain this is not the optimal solution. How would you improve my force_utf8(...) sub?

  • I can't get onChange to fire for dijit.form.Select

    - by user306462
    I seem unable to correctly attach the onchange event to a dijit.form.Select widget. However, I am new to web development, so I could be doing something completely idiotic (although, as best I can tell (and I've read all the docs I could find), I'm not). I made sure the body class matches the dojo theme and that I dojo.require() all the widgets I use (and dojo.parser), and still, nada. The code I'm using is:

        dojo.addOnLoad(function () {
            var _query = dojo.query('.toggle');
            for (var i = 0; i < _query.length; i++) {
                dojo.connect(_query[i], 'onchange', function (ev) {
                    console.log(ev + ' fired onchange');
                });
            }
        });

    Any help at all would be appreciated.

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as the increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunication industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe and effecting changes in the price plan faster, without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based product management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, the increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions, coupled with organic growth, pose major challenges in product portfolio management and are a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to a converged network. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus on their product management transformation journey in the areas of:

    ·  Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    ·  Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system
    ·  Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    ·  Management of effective operational separation to comply with regulatory bodies
    ·  Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next Generation Needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product data attributes in complex environments. Product mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    ·  Telco PLM-based product mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    ·  PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    ·  They maintain relationships/links between the different elements of the entire product definition
    ·  Telco PLM packages are specialized in the next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    ·  They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    ·  The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    ·  Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Mastering) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which the other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. The approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach does not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    ·  Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    ·  Product managers spent a lot of time performing tasks involving duplication or re-keying of data; the manual effort caused errors, cost and time over-runs
    ·  There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    ·  Product data had re-usability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    ·  The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    ·  Designers were constrained by the existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario – After EPC Deployment

    Figure 9: SOA-based End-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchies for the new offerings were created using the entities defined in the enterprise product model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through SOA-based web services. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based product catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications service providers are embarking on convergence and bundled service offerings, adding the flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives that improve ARPU by supporting network and business transformation are a business imperative. Product management has become a focus area, and companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero-touch automation and rapid product launch.

  • Fluent NHibernate and PostgreSQL, SchemaMetadataUpdater.QuoteTableAndColumns - System.NotSupportedEx

    - by Vyacheslav
    Hello! I'm using Fluent NHibernate with PostgreSQL. Fluent NHibernate is the latest version and the PostgreSQL version is 8.4. My code to create the ISessionFactory is:

        public static ISessionFactory CreateSessionFactory()
        {
            string connectionString = ConfigurationManager.ConnectionStrings["PostgreConnectionString"].ConnectionString;
            IPersistenceConfigurer config = PostgreSQLConfiguration.PostgreSQL82.ConnectionString(connectionString);
            FluentConfiguration configuration = Fluently
                .Configure()
                .Database(config)
                .Mappings(m => m.FluentMappings.Add(typeof(ResourceMap))
                                               .Add(typeof(TaskMap))
                                               .Add(typeof(PluginMap)));
            var nhibConfig = configuration.BuildConfiguration();
            SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig);
            return configuration.BuildSessionFactory();
        }

    When I execute the line SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig); it throws: System.NotSupportedException: Specified method is not supported. Help me, please! I really need a solution. Best regards

  • Question about the MIT License and open/closed source

    - by stck777
    Hello, I have a small useful application (a website management tool developed in Java) which I want to publish online for free (everyone can use it for free), but I do not want to make it open source yet. It is additional effort to clean up the code, write documentation and so on, and this is just a small useful tool. Can I use the MIT license in this case, or does the MIT license obligate me to distribute the source code? Which license is best in this case? Thanks.

  • String contains string in objective-c (iphone)

    - by Jonathan
    How can I check if a string (NSString) contains another smaller string? I was hoping for something like:

        NSString *string = @"hello bla bla";
        NSLog(@"%d", [string containsSubstring:@"hello"]);

    But the closest I could find was:

        if ([string rangeOfString:@"hello"].location == NSNotFound) {
            NSLog(@"substring doesn't exist");
        } else {
            NSLog(@"exists");
        }

    I typed that straight into Stack so sorry if there are errors, but there would be if I was doing it in Xcode so you don't need to point any out. Anyway, is that the best way to find out whether a string contains another string?

  • session variables lost between pages or use same variables

    - by user222333
    Hi, Yesterday I learned many things from you, especially from Marc, and my problem was solved (session variables lost between pages or use same variables). But now I continue asking: I don't want to use the session ID (session.use_trans_sid = 1) between pages, but I also don't want to use the same session variables for different users in the same application, and I don't want to lose session variables between pages for the same user. Is it possible? If yes, how? Thanks to everybody for any help. Best regards. I have WampServer (2.2.11) with PHP (5.2.9-2). My php.ini's session settings are below:

        [Session]
        session.save_handler = files
        session.save_path = "c:/wamp/tmp"
        session.use_cookies = 0
        ;session.cookie_secure =
        ;session.name = PHPSESSID
        session.auto_start = 0
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = 0
        session.bug_compat_warn = 1
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        ;session.entropy_length = 16
        ;session.entropy_file = /dev/urandom
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 1
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

  • Can't install psycopg2 (Python2.6 Ubuntu9.10 PostgreSQL8.4.2)

    - by yuv
    $ ~/psycopg2-2.0.13$ python setup.py install
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-2.6
        creating build/lib.linux-x86_64-2.6/psycopg2
        copying lib/__init__.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/pool.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/extensions.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/errorcodes.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/extras.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/tz.py -> build/lib.linux-x86_64-2.6/psycopg2
        copying lib/psycopg1.py -> build/lib.linux-x86_64-2.6/psycopg2
        running build_ext
        error: No such file or directory

    or

    $ sudo easy_install psycopg2==2.0.13
        Searching for psycopg2==2.0.13
        Reading http://pypi.python.org/simple/psycopg2/
        Reading http://initd.org/projects/psycopg2
        Reading http://initd.org/pub/software/psycopg/
        Best match: psycopg2 2.0.13
        Downloading http://initd.org/pub/software/psycopg/psycopg2-2.0.13.tar.gz
        Processing psycopg2-2.0.13.tar.gz
        Running psycopg2-2.0.13/setup.py -q bdist_egg --dist-dir /tmp/easy_install-QuViWt/psycopg2-2.0.13/egg-dist-tmp-P2W9Dh
        error: Setup script exited with error: No such file or directory

    thanks in advance

  • faking a filesystem / virtual filesystem

    - by attwad
    I have a web service to which users upload Python scripts that are run on a server. Those scripts process files that are on the server, and I want them to be able to see only a certain hierarchy of the server's filesystem (ideally: a temporary folder to which I copy the files I want processed and the scripts). The server will ultimately be a Linux-based one, but if a solution is also possible on Windows it would be nice to know how. What I thought of is creating a user with restricted access to folders of the FS - ultimately only the folder containing the scripts and files - and launching the Python interpreter as this user. Can someone give me a better alternative? Relying only on this makes me feel insecure; I would like a real sandboxing or virtual FS feature where I could safely run untrusted code.

  • Maven Plugin - Restart Jetty with new WAR?

    - by Walter White
    Hi all, What I would like to do is automatically test against several different Maven build profiles. I want to write a Maven plugin that iterates through each profile so I don't have to manually list them for the CI process. I just want to verify that the code works in the development, testing, staging, and production profiles once deployed there, and I want it to test against those profiles automatically so I can keep it part of the same Maven build. How would I best set that up to log those changes in Sonar or another tool? Walter

  • Cocoa Application with Menubar but no Dock Icon / switch menu

    - by Frank R.
    Hi, This is yet one more of those "how to switch from running with a dock icon to running without one" questions, with a twist: I don't want the dock icon, but I do want a menu bar when the application is at the front. Is that possible? Running an application with LSUIElement set to 1 in the plist will launch the application without a dock icon, without showing up in the command-tab switch list, and without a menu. You can switch from that mode to the "normal" mode, with all three switched on, via SetSystemUIMode from 10.2 onwards and via NSApplication's setActivationPolicy: since 10.6, but crucially there is no way back to the previous mode (go figure). So one way around this would be to launch with LSUIElement = 1 and then activate the menu bar when the application gets the focus and deactivate it when the application loses the focus... alas, I can't find a way of doing that. Can anybody help? Best regards, Frank

  • Setting freemarker template from classpath

    - by wuntee
    I have a web application where I need to manually obtain a FreeMarker template - the template is obtained via a class in a library project, but the actual .tpl file is contained in the web application's classpath. So, there are 2 projects, 'taac-backend-api' and 'taac-web'; taac-backend-api has the code to grab the template and process it, but taac-web is where the template is stored (specifically in WEB-INF/classes/email/vendor.tpl). I have tried everything from using Spring's classpath resource to using FreeMarker's setClassForTemplateLoading method. I assume this would work:

        freemarkerConfiguration = new Configuration();
        freemarkerConfiguration.setClassForTemplateLoading(this.getClass(), "");
        Template freemarkerTemplate = freemarkerConfiguration.getTemplate("/email/vendor.tpl");

    yet I always get a FileNotFoundException. Can someone explain the best way to obtain a template from the classpath? Thanks.

  • How to avoid circular relationship in SQL-Server?

    - by Shimmy
    I am creating a self-related table:

    Table Item columns:
        ItemId int - PK;
        Amount money - not null;
        Price money - a computed column using a UDF that retrieves a value according to the item's ancestors' Amount;
        ParentItemId int - nullable, reference to another ItemId in this table.

    I need to avoid a loop, meaning an item cannot become an ancestor of its own ancestors. For example, if ItemId 2 has ParentItemId = 1, then setting ItemId 1's ParentItemId = 2 shouldn't be allowed. I don't know what the best practice is in this situation. I think I should add a check constraint that gets a scalar value from a UDF, or something along those lines. EDIT: Another option is to create an INSTEAD OF trigger and put the update of the ParentItemId field and the selection of the Price field from the @@RowIdentity in one transaction; if it fails, cancel the transaction. But I would prefer a UDF-based validation. Any ideas are sincerely welcomed.

  • Avoiding .NET versioning hell

    - by moogs
    So sometimes (oftentimes!) you want to target a specific .NET version (say 3.0), but then due to some .NET service packs you get into problems like:

        Dispatcher.BeginInvoke(Delegate, Object[]) <-- added in 3.0 SP2 (3.0.30618)
        System.Threading.WaitHandle.WaitOne(Int32) <-- added in 3.5 SP1, 3.0 SP2, 2.0 SP2

    Now, these are detected by the JIT compiler, so building against .NET 3.0 in Visual Studio won't guarantee it will run on .NET 3.0-only systems. Short of confirming each and every function you use, or limiting your development environment to .NET 3.0 (which sucks since you have to develop for other projects too), what's the best way to guard against using these service-pack additions? Thanks!

  • WCF REST vs. ADO.NET Data Services

    - by ray247
    Hi there, Could someone compare and contrast WCF REST services vs. ADO.NET Data Services? What is the difference, and when should you use which? Thanks, Ray. Edit: thanks to the first answer; just to give a bit of background on what I'm looking to do: I have a web app I plan to put in the cloud (someday), and the DAL is built with the ADO.NET Entity Framework. I need to figure out which web service data access technology would best fit my case.

  • .NET Build Process

    - by Nix
    All, I am looking for the best free set of tools to be used in an MS-based build process: checkout, build, package, test, deploy, etc. I know this question has been asked before, but it was over 2 years ago, and in our world that is an eternity. I am looking to develop a pattern that is easily adapted to similar projects, almost like a template/cookie-cutter system. I am currently looking into using the CruiseControl, PowerShell and MSBuild suite of tools. If we choose to move to 4.0, will we have issues? Are there better alternatives? Limitations? Or will these pretty much meet my needs? One piece that I am never happy with is the process of packaging. We actually opted in the past to just use Visual Studio deployment projects, but those are very ancient and my fear is WiX will be too complicated for the people implementing it.
