Search Results

Search found 14745 results on 590 pages for 'setting'.

  • Is there any good reason to use <rtexprvalue>false</rtexprvalue> in JSP tags?

    - by Superfilin
    Is there any good reason to disallow a scriptlet or EL expression being inserted as an attribute value? Let's say we have this tag:

        <tag>
          <name>mytag</name>
          <tag-class>org.apache.beehive.netui.tags.tree.Tree</tag-class>
          <attribute>
            <name>attr</name>
            <required>false</required>
            <rtexprvalue>false</rtexprvalue>
            <type>boolean</type>
          </attribute>
        </tag>

    What could be a good reason for disallowing the below?

        <my:mytag attr="${setting}" />

    Read the article

  • MVC Authorize Attribute + HttpUnauthorizedResult + FormsAuthentication

    - by Anthony
    After browsing the MVC section on CodePlex I noticed that the [Authorize] attribute in MVC returns a HttpUnauthorizedResult() when authorization fails (codeplex AuthorizeAttribute class). In the source of HttpUnauthorizedResult() from CodePlex is the code (I'm not allowed to enter another URL as my rep isn't high enough, but replace the numbers on the URL above with 22929#266476):

        // 401 is the HTTP status code for unauthorized access - setting this
        // will cause the active authentication module to execute its default
        // unauthorized handler
        context.HttpContext.Response.StatusCode = 401;

    In particular, the comment describes the authentication module's default unauthorized handler. I can't seem to find any information on this default unauthorized handler. In particular, I'm not using FormsAuthentication, and when authorization fails I get an ugly IIS 401 error page. Does anyone know about this default unauthorized handler, and in particular how FormsAuthentication hooks itself in to override it? I'm writing a really simple app for my football team who confirm or deny whether they can play a particular match. If I enable FormsAuthentication in the web.config the redirect works, but I'm not using FormsAuthentication and I'd like to know if there's a workaround.
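
    For context, the "default unauthorized handler" in the forms-authentication case is the FormsAuthenticationModule: it watches for 401 responses at the end of the request and converts them into a 302 redirect to the configured login page, which is why enabling forms authentication in web.config makes the redirect appear. A minimal sketch of that configuration (the loginUrl path is a placeholder, not a value from the question):

        <system.web>
          <authentication mode="Forms">
            <!-- FormsAuthenticationModule turns 401 responses into redirects to this URL -->
            <forms loginUrl="~/Account/LogOn" timeout="30" />
          </authentication>
        </system.web>

    Without such a module in the pipeline, the 401 falls through to IIS, which would explain the bare IIS error page described above.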

    Read the article

  • WCF client service call very slow

    - by JohnIdol
    I have a WCF client (running on Win7) pointing to a WebSphere service. All is good from a test harness but when my calls to the service originate from an HttpHandler or a standard aspx page the calls are extremely slow (it takes up to 10 times longer) and not just the first time. If I send the exact same envelopes through Soap-UI or without an HttpHandler it's all good. If the envelope is the same the only thing left is the HttpHeader. What should I be looking for and is there some config setting that might do the trick? Any help appreciated!
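
    Two client-side settings are worth ruling out here, since the difference has been narrowed down to the HTTP headers: ASP.NET's default cap of two outbound connections per host, and the Expect: 100-continue header that .NET clients add to outgoing POSTs and that some non-IIS servers handle slowly. A hedged sketch of the relevant client-side web.config section - the values shown are arbitrary examples, not recommendations:

        <system.net>
          <connectionManagement>
            <!-- ASP.NET defaults to 2 concurrent outbound connections per host -->
            <add address="*" maxconnection="12" />
          </connectionManagement>
          <settings>
            <!-- suppress the Expect: 100-continue handshake on outgoing requests -->
            <servicePointManager expect100Continue="false" />
          </settings>
        </system.net>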

    Read the article

  • TortoiseSVN svnadmin

    - by PetPaulsen
    Currently I'm setting up TortoiseSVN and reading through the docs. The manual often mentions svnadmin. I figured out that I have to download it separately, but the link seems to be old. After some browsing I got here. But I can't find a version 1.6.7 to match my TortoiseSVN installation. Also I'm a little bit lost because of the many files. So where can I get svnadmin from?

    Read the article

  • Shared Cookies between WebView and HTTPClient?

    - by Arpit
    An Android app I am building requires web authentication for users to make data calls. In Adobe AIR and later the iPhone, we did this by rendering a login page in a WebView-equivalent page and setting a cookie when the user signs in. Subsequent data calls use the same cookie jar and so are seen as authenticated. In the Android version, I authenticate the user using a WebView and then, once that's done, I make a data call using DefaultHttpClient; however, I can't seem to load the data on the second call. Is there some cookie gotcha I am missing? I imagine the HttpClient and WebView would share the same cookie space. Am I wrong?
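
    For reference, the WebView's cookies (managed by android.webkit.CookieManager) and DefaultHttpClient's cookie store are separate, so a session cookie set during the WebView login is not automatically visible to the HttpClient. A sketch of copying the cookies across before making the data call - the URL and domain arguments are placeholders for the real authentication host:

        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.cookie.BasicClientCookie;
        import android.webkit.CookieManager;

        public final class CookieBridge {
            // Copy every cookie the WebView holds for the given URL into the
            // HttpClient's cookie store, so subsequent data calls send them too.
            public static void copyWebViewCookies(DefaultHttpClient client,
                                                  String url, String domain) {
                String cookies = CookieManager.getInstance().getCookie(url);
                if (cookies == null) return;
                for (String pair : cookies.split(";")) {
                    String[] kv = pair.trim().split("=", 2);
                    BasicClientCookie c =
                            new BasicClientCookie(kv[0], kv.length > 1 ? kv[1] : "");
                    c.setDomain(domain);
                    c.setPath("/");
                    client.getCookieStore().addCookie(c);
                }
            }
        }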

    Read the article

  • Sync Framework - In order to evaluate an indexed property, the property must be qualified and the arguments must be explicitly supplied by the user

    - by Khor
    I'm trying to do a snapshot between SQLCE 3.5 SP1 and SQL Server 2008 by using the Sync Framework. I received an inner exception stating: "Unable to enumerate changes at the DbServerSyncProvider for table 'ArInvoice' in synchronization group 'ArInvoiceSyncTableSyncGroup'." The inner exception's message states: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." And the item for the inner exception indicates: "In order to evaluate an indexed property, the property must be qualified and the arguments must be explicitly supplied by the user." What is going wrong? Is there any setting I missed?

    Read the article

  • "NOT_SUPPORTED_BY_GUI" Exception in JCo

    - by cedar715
    We have a BAPI that uploads a specified document to SAP. The BAPI accepts three parameters: ID, FILE_LOC and FOLDER_NAME. And I'm setting the values as follows in the JCo code:

        JCO.ParameterList paramList = function.getImportParameterList();
        paramList.setValue("101XS1", "EXTERNAL_ID");
        paramList.setValue("tmp", "FOLDER_NAME");
        paramList.setValue("D:/upload/foo.txt", "FILE_LOCATION");

    But when I try to execute the BAPI, I get the following exception:

        com.sap.mw.jco.JCO$Exception: (104) RFC_ERROR_SYSTEM_FAILURE: Exception condition "NOT_SUPPORTED_BY_GUI" raised.
            at com.sap.mw.jco.rfc.MiddlewareRFC$Client.nativeExecute(Native Method)
            at com.sap.mw.jco.rfc.MiddlewareRFC$Client.execute(MiddlewareRFC.java:1242)
            at com.sap.mw.jco.JCO$Client.execute(JCO.java:3816)
            at com.sap.mw.jco.JCO$Client.execute(JCO.java:3261)

    The same BAPI works fine if I execute it through the thick client (SAP Logon). But through JCo, it's giving this error.

    Read the article

  • asp.net mvc changing one menu based on which controller is selected from another menu

    - by jj
    Hi, I am envisioning a site layout like this: a top navigation menu linking to maybe 4 or 5 indexes of different controllers. Each of these sections will be working with different model objects. The left navigation menu is specific to a controller. So, for each of the top menu buttons (corresponding to different controllers) I would like the left navigation menu to offer options only specific to the currently used controller. What's the best way to go about setting this up? Thanks!!

    Read the article

  • jQuery — Nested Sortables Plugin — Disabling sortability between parents

    - by AJB
    I've got a question that I think is simple but I've not been able to figure it out. This is in regard to this plugin: http://mjsarfatti.com/sandbox/nestedSortable/ Essentially, I want to disable the ability to sort children outside of their parents. So, I've got this:

        CATEGORY 1
            ITEM 1.1
            ITEM 1.2
            ITEM 1.3
        CATEGORY 2
            ITEM 2.1
            ITEM 2.2
            ITEM 2.3

    I'd like to provide the ability for users to sort the children within their category, and the ability to sort the categories themselves. But I want to disable the ability to move a child to another parent (e.g. ITEM 1.1 cannot be moved to CATEGORY 2), and I would also like to disable the ability to nest any parents in any children. I tried setting it up so that the 'nestedSortable' function is called for every new OL, but that simply disables sorting for everything entirely. Thanks for any help.

    Read the article

  • Continuous Integration for SQL Server Part II - Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

    - Putting your database in to a source control system, and,
    - Running a continuous integration process, each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    - Putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item - monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software - GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully - so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

    - Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases - Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS; you should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.

    As yet, though we've installed the SQL Test product, we haven't installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.

    In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database.

    We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database so should, again, check this in to source control, adding comments as required. Also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse - our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data in to my solitary table - this can be done a number of ways, but I'll do this in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database - and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table.

    Then, to source control this reference data, right click on the table (dbo.Email) and select the option to link its static data. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo.

    NB: From here on, I won't show screenshots for the GitHub side of things - it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control.."): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one - are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) - but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in.

    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build still shows a successful build - sadly!

    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number) - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page. The interesting part of the log shown is towards the bottom. Pulling out this part:

    21-Jun-2013 11:35:19 Build FAILED.
    21-Jun-2013 11:35:19
    21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
    21-Jun-2013 11:35:19 (sqlCI target) ->
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build - it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment - we've tested our database changes; how can we now deploy these to target sites?

    Read the article

  • jquery cycle "allowPagerClickBubble: true" not quite working

    - by Shelagh
    Hi all, I want my page anchors to work as menu links so I used this code:

        $(function() {
            $('#slideshow').cycle({
                slideExpr: 'img',
                fx: 'fade',
                speed: 2000,
                timeout: 4000,
                pager: '#nav',
                pagerEvent: 'mouseover',
                pauseOnPagerHover: true,
                pagerAnchorBuilder: function(idx, slide) {
                    // return sel string for existing anchor
                    return '#nav li:eq(' + (idx) + ') a';
                },
                allowPagerClickBubble: true
            });
        });

    If you click the right mouse button, you can choose to open the links and they open just fine, but a normal left-button click does nothing. The thing is, they WERE working, so I went to work on supposedly unrelated parts of the website and at some point the left-button click stopped working. Any ideas what I might have done? Left-button clicking still works for everything else on the site, so it's not some global setting. The cycle is here if my explanation is not clear: http://www.mcguirenaturals.ca/index.php Thanks for any help

    Read the article

  • ColdFusion 9 Cluster with IIS7.5

    - by Adam Winter
    Does anyone know of a step-by-step process for setting up a ColdFusion 9 cluster using IIS 7.5? Either failover or network load balancing would be fine. With IIS not being clusterable in Windows 2008 R2, I'm not sure of the best way to configure the web server and the ColdFusion service. Some of the things I'm looking for are:

    - With load balancing, what do you use out in front of the servers as a load balancer?
    - If you're using a network share on a clustered file server for the website data files, how do you configure the ColdFusion service to run so that it has network access instead of running as Local System?

    Read the article

  • Initiating UserControl via MVVM in WPF / focus issue.

    - by benndev
    Hi there, I have a few UserControls loaded into a TabControl via MVVM in WPF. Within the XAML for the UserControl I am setting focus to a TextBox using the FocusManager; however, this appears to only work when the first instance of the UserControl is created. Just to test, I added a Loaded event handler to the UserControl - it is only called on the first instance. I'm using data templates for the user controls as follows:

        <DataTemplate DataType="{x:Type local:UserTypeViewModel}">
            <local:UserTypeView />
        </DataTemplate>

    The TextBox is focused as follows:

        FocusManager.FocusedElement="{Binding ElementName=txtName}"

    Additionally I'm using a global event handler (for the TextBox GotFocus event) which selects all the text using a dispatcher. If anyone has any tips on how to achieve focus with every UserControl I'd be very grateful. Thanks, Ben.

    Read the article

  • Configuring Team City internal.properties to increase git fetch memory

    - by 78lro
    When pulling from Git, my TeamCity install is getting an out-of-memory error. According to the TeamCity documentation I should be able to increase the memory assigned to the git fetch process by setting the value of teamcity.git.fetch.process.max.memory to something greater than the default 512MB. http://confluence.jetbrains.net/display/TCD65/Git+%28JetBrains%29#Git%28JetBrains%29-InternalProperties The problem is that there does not appear to be an internal.properties file in the location specified. I have tried creating one at TeamCity/conf/internal.properties as suggested here: http://devnet.jetbrains.net/thread/302596 But I still get the out-of-memory issue when TeamCity tries to pull from GitHub. Thanks
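
    For reference, internal.properties is a plain Java properties file, and on TeamCity of this vintage it normally lives under the server's data directory (e.g. .BuildServer/config/internal.properties) rather than under the installation folder, which may be why creating it in TeamCity/conf has no effect. A sketch of the entry - the 1024 figure is an arbitrary example, not a recommendation:

        # <TeamCity data directory>/config/internal.properties
        # memory cap for the separate git fetch process, in MB
        teamcity.git.fetch.process.max.memory=1024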

    Read the article

  • Making ehcache read-write for test code and read-only for production code

    - by Rick
    I would like to annotate many of my Hibernate entities that contain reference data and/or configuration data with @Cache(usage = CacheConcurrencyStrategy.READ_ONLY). However, my JUnit tests are setting up and tearing down some of this reference/configuration data using the Hibernate entities. Is there a recommended way of having entities be read-write during test setup and teardown but read-only for production code? Two of my immediate thoughts for non-ideal workarounds are:

    - Using NONSTRICT_READ_WRITE, but I am not sure what the hidden downsides are.
    - Creating subclassed entities in my test code to override the read-only cache annotation.

    Any recommendations on the cleanest way to handle this? (Note: Project uses Maven.)
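
    One more non-ideal workaround, sketched below on the assumption that the tests build their own SessionFactory: switch the second-level cache off entirely in the test configuration, so the READ_ONLY strategy never engages and the fixtures can modify the entities freely. The property names are standard Hibernate settings; the helper class itself is hypothetical:

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public final class TestSessionFactoryBuilder {
            // Build a test-only SessionFactory with caching disabled, leaving the
            // production hibernate.cfg.xml (and the @Cache annotations) untouched.
            public static SessionFactory build() {
                Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
                cfg.setProperty("hibernate.cache.use_second_level_cache", "false");
                cfg.setProperty("hibernate.cache.use_query_cache", "false");
                return cfg.buildSessionFactory();
            }
        }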

    Read the article

  • iphone - Programmatically set (System-wide) proxy settings?

    - by Andrew
    I am new to iPhone development, so I'm sorry if this is a stupid question. I am developing an application whose purpose will be to route all iPhone activity through my company's proxy. Is there a way to programmatically set system-wide proxy settings in the iPhone (which will also take effect on the 3G connection)? I know there is a way to manually set proxy settings for each wifi connection. Detecting new networks and setting the proxy on them would be acceptable. However, I need to also be able to set the proxy on the 3G connection. Also, bonus: Is there a way to programmatically change the "Restrictions" settings? If anyone has any tips or can point me in the right direction, I would appreciate it. Thanks. EDIT: Please understand that this is for a legitimate purpose. Apple has to approve app store additions, so it's not like I'm trying to spread a virus. Please, constructive answers only.

    Read the article

  • How do I base a style on a Silverlight toolkit theme style

    - by Ian Oakes
    I've been trying to add a theme from the Silverlight Toolkit to a project. In the project there are a number of existing styles used in the layout. The problem is that when any control has an explicit style applied to it, it does not receive any attributes of the style from the theme. In WPF I would use something like BasedOn={x:Type TextBox}, but this is not supported in Silverlight. I've considered going through the theme, setting a key for every style, and then using BasedOn to create both an implicit style to use with the ImplicitStyleManager as well as another explicit style for use with the existing styled controls. Have you got any better ideas?

    Read the article

  • How to enable HTTP response caching in Spring Boot

    - by Samuli Kärkkäinen
    I have implemented a REST server using Spring Boot 1.0.2. I'm having trouble preventing Spring from setting HTTP headers that disable HTTP caching. My controller is as follows:

        @Controller
        public class MyRestController {
            @RequestMapping(value = "/someUrl", method = RequestMethod.GET)
            public @ResponseBody ResponseEntity<String> myMethod(
                    HttpServletResponse httpResponse) throws SQLException {
                return new ResponseEntity<String>("{}", HttpStatus.OK);
            }
        }

    All HTTP responses contain the following headers:

        Cache-Control: no-cache, no-store, max-age=0, must-revalidate
        Expires: 0
        Pragma: no-cache

    I've tried the following to remove or change those headers:

    - Calling setCacheSeconds(-1) in the controller.
    - Calling httpResponse.setHeader("Cache-Control", "max-age=123") in the controller.
    - Defining a @Bean that returns a WebContentInterceptor for which I've called setCacheSeconds(-1).
    - Setting the property spring.resources.cache-period to -1 or a positive value in application.properties.

    None of the above has had any effect. How do I disable or change these headers for all or individual requests in Spring Boot?
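
    One thing worth checking first: that exact header set matches Spring Security's default header writers, so if spring-boot-starter-security is on the classpath, Spring Security - not Spring MVC - is likely rewriting the headers when the response is committed, which would also explain why setHeader() in the controller appears to do nothing. A hedged sketch of a Java config that keeps the other default headers but omits the cache-control writer; this uses the Spring Security 3.2 headers DSL, where listing specific writers replaces the default set (verify against your version):

        import org.springframework.context.annotation.Configuration;
        import org.springframework.security.config.annotation.web.builders.HttpSecurity;
        import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
        import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

        @Configuration
        @EnableWebSecurity
        public class HeadersConfig extends WebSecurityConfigurerAdapter {
            @Override
            protected void configure(HttpSecurity http) throws Exception {
                // Naming writers explicitly replaces the defaults;
                // cacheControl() is deliberately left out.
                // Note: overriding configure() also replaces the adapter's
                // default authorization rules, so restate them as needed.
                http.headers()
                        .contentTypeOptions()
                        .xssProtection()
                        .frameOptions()
                        .httpStrictTransportSecurity();
            }
        }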

    Read the article

  • QT's QGraphicsview clickable icon on screen

    - by goodwince
    I'm working on a project with Qt and am trying to draw icons from a database. I have auxiliary information in the table that I would like to display if the user chooses to see it (i.e. the x,y of the icon and some other options from the database). I am debating whether it would be better to redraw all the icons with this information added, or to loop over the existing icons and set some flag to true to display the information. Thanks in advance.

    Read the article

  • android: How to set a listener that fires when my ViewFlipper shows a new child

    - by Joseph Cheek
    Hi Android gurus, I have a ViewFlipper for which I want a listener to fire when the displayed child is changed. I have set an OnFocusChangeListener on the ViewFlipper but it never fires when I flip from child 0 to child 1 or vice-versa. The ViewFlipper contains two RelativeLayouts and I have tried setting OnFocusChangeListeners for those, but I get a ClassCastException when I try to set it. Here's my code:

        RelativeLayout songsLayout = (RelativeLayout) findViewById(R.id.song_page_layout);
        songsLayout.setOnFocusChangeListener(new OnFocusChangeListener() {
            public void onFocusChange(View view, boolean hasFocus) {
                showPopUp("View " + view.getId() + " now has focus: " + hasFocus);
            }
        });

    R.id.song_page_layout is one of my RelativeLayouts and showPopUp() is a function I use to show, well, popups. Does anyone have working code for some sort of trigger that fires when a ViewFlipper changes which child is displayed?
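
    For what it's worth, ViewFlipper itself doesn't expose a child-change callback, but all of its flips (showNext(), showPrevious(), setDisplayedChild()) funnel through ViewAnimator.setDisplayedChild(), so one option is a small subclass that raises its own listener - the listener interface below is hypothetical, not part of the SDK:

        import android.content.Context;
        import android.util.AttributeSet;
        import android.widget.ViewFlipper;

        public class NotifyingViewFlipper extends ViewFlipper {

            public interface OnDisplayedChildChangedListener {
                void onDisplayedChildChanged(int whichChild);
            }

            private OnDisplayedChildChangedListener listener;

            public NotifyingViewFlipper(Context context, AttributeSet attrs) {
                super(context, attrs);
            }

            public void setOnDisplayedChildChangedListener(OnDisplayedChildChangedListener l) {
                listener = l;
            }

            @Override
            public void setDisplayedChild(int whichChild) {
                super.setDisplayedChild(whichChild);
                // showNext()/showPrevious() end up here too
                if (listener != null) {
                    listener.onDisplayedChildChanged(getDisplayedChild());
                }
            }
        }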

    Read the article

  • Windows Azure Evolution - Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high performance and highly scalable system, not only on top of the cloud platform but in the on-premise environment as well. In March 2011 the Windows Azure AppFabric Caching was production launched. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announced a brand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). The original Windows Azure AppFabric Caching was renamed to Windows Azure Shared Caching.

    What's Caching (Preview)

    If you have been using the Shared Caching you should know that it is constructed from a bunch of cache servers. When you want to use it you firstly create a cache account from the developer portal and specify the size you want, which means how much memory you can use to store the data you want cached. Then you can add, get and remove items through your code via the cache URL. The Shared Caching is a multi-tenancy system which hosts all cached items across all users, so you don't know which server your data is located on. This caching mode works well and covers most cases, but it has some problems.

    The first one is performance. Since the Shared Caching is a multi-tenancy system, all cache operations go through the Shared Caching gateway and are then routed to the server which has the data you are looking for. Even though there are some caches in the Shared Caching system, it still takes time to get from your cloud services to the cache service.

    Secondly, the Shared Caching service works as a black box to the developer. The only thing we know is our cache endpoint, and that's all. Some may be satisfied with that, since they don't want to care about anything underlying. But if you need to know more and want more control, that's impossible in the Shared Caching.

    The last problem would be the price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching we have to pay for the cache service while wasting some of our memory and CPU locally.

    Given the issues above, Microsoft offered a new caching mode to us, which is the Caching (Preview). Instead of having a separate cache service, the Caching (Preview) leverages the memory and CPU in our cloud services (web role and worker role) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which host, or are near, our cloud applications. Without any gateway and routing, and since it is located in the same data center and same racks, it provides really high performance compared with the Shared Caching. The Caching (Preview) works side-by-side with our application, initialized and run as a Windows Service in the virtual machines, invoked by the startup tasks from our roles, so we get more information about and control over it. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, it's free: what we pay for is the original computing price, and the resources on each machine can be used more efficiently.

    Enable Caching (Preview)

    It's very simple to enable the Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project from Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
    This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview) just tick the "Enable Caching (Preview Release)" check box. Then we need to specify which caching cluster mode we want to use. There are two kinds of caching mode: co-located and dedicated.

    The co-located mode means we use the memory in the instances that run our cloud services (web role or worker role). When using this mode we must specify what percentage of the memory will be used as the cache; the default value is 30%. Make sure it will not affect the role's business execution. The dedicated mode will use all memory in the virtual machine as the cache. In fact it will reserve some for the operating system, Azure hosting, etc., but it will try to use as much of the available memory as possible for the cache.

    As you can see, the Caching (Preview) is defined per role, which means all instances of a role apply the same settings and act as a single cache pool, and you consume it by specifying the name of the role, which I will demonstrate later. And in a Windows Azure project we can have more than one role with the Caching (Preview) enabled; then we will have more caches. For example, let's say I have a web role and a worker role. On the web role I specified 30% co-located caching and on the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches: cache 1 contributed by the three web role instances, and cache 2 contributed by the two worker role instances. We can then add items into cache 1 and retrieve them from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2, since they are isolated.

    Back in Visual Studio, we specify 30% co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches; for now we just use the default one. We have now enabled the Caching (Preview) in our web role settings. Next, let's have a look at how to consume our cache.

    Consume Caching (Preview)

    The Caching (Preview) can only be consumed by the roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected to from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) from another cloud service. This is different from the Shared Caching, which is open to all services if they have the connection URL and authentication token.

    To consume the Caching (Preview) we need to add some references into our project as well as some configuration in the Web.config. NuGet makes our life easy: right click on our web role project and select "Manage NuGet packages", then search for the package named "WindowsAzure.Caching". In the package list install the "Windows Azure Caching Preview" package. It will download all necessary references from the NuGet repository and update our Web.config as well.

    Open the Web.config of our web role and find the "dataCacheClients" node. Under this node we can specify the cache clients we are going to use. Each cache client uses the role name to identify and find the cache. Since we only have this one web role with the Caching (Preview) enabled, I pasted the current role name in the configuration. Then, in the default page, I will add some code to show how to use the cache.
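
    Before moving on to the page code: for reference, the dataCacheClients section mentioned above ends up looking roughly like the sketch below, where the identifier is the name of the role hosting the cache - "WebRole1" here is a placeholder, not a value from this walkthrough:

        <dataCacheClients>
          <dataCacheClient name="default">
            <!-- identifier = name of the role contributing the cache -->
            <autoDiscover isEnabled="true" identifier="WebRole1" />
          </dataCacheClient>
        </dataCacheClients>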
    I will have a textbox on the page where the user can input his or her name, then press a button to generate an email address for them. In the backend code I will check if this name has been added to the cache. If yes, I return the email address immediately; otherwise, I sleep the thread for 2 seconds to simulate the latency, then add it into the cache and return it back to the page.

        protected void btnGenerate_Click(object sender, EventArgs e)
        {
            // check if name is specified
            var name = txtName.Text;
            if (string.IsNullOrWhiteSpace(name))
            {
                lblResult.Text = "Error. Please specify name.";
                return;
            }

            bool cached;
            var sw = new Stopwatch();
            sw.Start();

            // create the cache factory and cache
            var factory = new DataCacheFactory();
            var cache = factory.GetDefaultCache();

            // check if the name specified is in cache
            var email = cache.Get(name) as string;
            if (email != null)
            {
                cached = true;
                sw.Stop();
            }
            else
            {
                cached = false;
                // simulate the latency
                Thread.Sleep(2000);
                email = string.Format("{0}@igt.com", name);
                // add to cache
                cache.Add(name, email);
            }

            sw.Stop();
            lblResult.Text = string.Format(
                "Cached = {0}. Duration: {1}s. {2} => {3}",
                cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
        }

    The Caching (Preview) can be used on the local emulator, so we just press F5. The first time I enter my name it takes about 2 seconds to get the email address back, since it was not in the cache. But if we re-enter my name it comes back at once, from the cache. Since the Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances and tweak some code to show the current instance ID in the page, and have another try. Then we can see the cache can be retrieved even though it was added by another instance.

    Consume Caching (Preview) Across Roles

    As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role in our cloud solution and add the same code in its default page. In its Web.config we add the cache client pointing to the one enabled in the first role, by specifying that role's name. Then we start the solution locally and go to web role 1, specify the name and let it generate the email address for us. Since there's no cache for this name it takes about 2 seconds, but it saves the email into the cache. And then we go to web role 2 and specify the same name. Then you can see it retrieves the email saved by web role 1 and returns it very quickly. Finally we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to a real Azure account.

    More Awesome Features

    As an in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is the high availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache cluster node fails, the data it stored is lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache should be backed up automatically. If so, the data belonging to that named cache will be replicated on another instance of the role.
    Then if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup, just open the Caching page in Visual Studio and, in the named cache for which you want to enable backup, change the Backup Copies value from 0 to 1. The only valid values for Backup Copies are 0 and 1: "0" means no backup and no high availability, while "1" means high availability is enabled, with the data backed up into another instance.

    When using the high availability feature there are some things we need to keep in mind. Firstly, high availability does NOT mean the data in the cache can never be lost under any kind of failure. For example, if we have a role with cache enabled that has 10 instances, and 9 of them fail, then most of the cached data will be lost, since a primary and its backup instance may have failed together. But normally this will not happen, since MS guarantees that it will use an instance in a different fault domain for the backup cache. Secondly, enabling the backup means you store two copies of your data. For example, if you think 100MB of memory is enough for your cache, you need at least 200MB once backup is enabled.

    Besides the high availability, the Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching does. It supports a local cache with notifications. It also supports both absolute and sliding-window expiration types. And the Caching (Preview) supports the Memcached protocol as well. This means that if you have an application based on Memcached, you can use Caching (Preview) without any code changes; what you need to do is change the configuration of how you connect to the cache. Similar to the Windows Azure Shared Caching, MS also offers out-of-the-box ASP.NET session state and output cache providers on top of the Caching (Preview).

    Summary

    Caching is a very important component when we are building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effectiveness. So now we have two caching solutions in Windows Azure: the Shared Caching and the Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near distributed cache, then you'd better use the Caching (Preview).

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • jQuery table drag and drop plugin within iFrame

    - by Bala
    I am using the latest version (0.5) of Table Drag and Drop plugin (http://www.isocra.com/2008/02/table-drag-and-drop-jquery-plugin/) for jQuery. I have a problem when the table with the draggable rows is inside an iframe. When I drag a row and take it to the top, the page will not scroll (even after explicitly setting scrollAmount to a positive value). Scrolling works on the same table if it is not inside an iframe. Has anyone faced this problem? Has anyone figured out a solution for this?

    Read the article

  • Vista/7 UAC: how to lower process privileges

    - by Sergey
    Hi all, is it possible for a process to lower itself from elevated UAC permission back to standard user? If not, can the elevated process launch a copy of itself with a standard user token and then kill itself? Any code examples (C# preferred)? Details of the problem:

    - The user installs my product (written in C#).
    - The installer elevates its UAC permission to admin.
    - At the end, the installer launches my exe.
    - The exe inherits elevated permissions from admin.
    - The exe mounts network drives, which become invisible in Windows Explorer (which runs with regular permissions).

    Options I considered:

    1) Break the installer into an outer exe and an inner exe that runs with elevated permission. The installer consists of 1000+ lines of NSIS code and I don't know anything about NSIS.
    2) Mount the drives with lower permissions. If I do that, Windows Explorer can see the drives but my exe cannot.
    3) Set the EnableLinkedConnections registry option to 1. This is a no-go because it requires a PC reboot during the installation.

    Please help! Sergey

    Read the article

  • iPhone Provisioning: What's it all about?

    - by david van brink
    Grepping around, I see that I'm not AT ALL alone in being... challenged... by the process of setting up an iPhone app, getting it to run, giving it my testers, and so on. I've gotten it to work. Somehow I emailed a copy or two to testers, and eventually got my li'l app into the store, and that was fine. But I can't say a really, deeply understand it! (And I don't do iDev every day. Even now my recollection of what I did is kind-of hazy.) I'm moderately capable of understanding things, if presented, well, you know, in a way I can understand. Can anyone point me to a crystal clear explanation of what provisioning actually is? I feel that if I understood it, the recipes to do it would be obvious. Thanks!

    Read the article

  • SQL Server 2005 sp_send_dbmail

    - by Jit
    Hi Friends, When we use sp_send_dbmail to send email with attachment, the attachment gets copied into a folder inside C:\Windows\Temp. As we have many emails to be sent every day, the temp folder grows rapidly. This is the case with SQL Server 2005. We noticed that, with SQL Server 2008, we dont see these file under temp folder. Is there any setting to turn the above behavior off? Does SQL Server 2008 store the files in any other folder and not in temp? Appreciate your help and time. Thanks.

    Read the article
