Search Results

Search found 13012 results on 521 pages for 'action caching'.


  • Loosely Coupled Tabs in Java Editor

    - by Geertjan
    One of the NetBeans Platform 7.1 API enhancements is the @MultiViewElement.Registration annotation, which lets you add a new tab to any existing NetBeans editor. It's really powerful, since I didn't need to change the sources (or even look at the sources) of the Java editor to add the "Visualizer" tab to it. Right now, the tab doesn't show anything; that will come in the next blog entry. The point here is to show how to set things up so that you have a new tab in the Java editor, without needing to touch any of the NetBeans IDE sources. And here's the code – take note of the annotation, which registers the JPanel for the "text/x-java" MIME type:

        import javax.swing.Action;
        import javax.swing.JComponent;
        import javax.swing.JPanel;
        import javax.swing.JToolBar;
        import org.netbeans.core.spi.multiview.CloseOperationState;
        import org.netbeans.core.spi.multiview.MultiViewElement;
        import org.netbeans.core.spi.multiview.MultiViewElementCallback;
        import org.openide.awt.UndoRedo;
        import org.openide.loaders.DataObject;
        import org.openide.util.Lookup;
        import org.openide.util.NbBundle;
        import org.openide.windows.TopComponent;

        @MultiViewElement.Registration(
                displayName = "#LBL_Visualizer",
                iconBase = "org/java/vis/icon.gif",
                mimeType = "text/x-java",
                persistenceType = TopComponent.PERSISTENCE_NEVER,
                preferredID = "JavaVisualizer",
                position = 3000)
        @NbBundle.Messages({
            "LBL_Visualizer=Visualizer"
        })
        public class JavaVisualizer extends JPanel implements MultiViewElement {

            private JToolBar toolbar = new JToolBar();
            private DataObject obj;
            private MultiViewElementCallback mvec;

            public JavaVisualizer(Lookup lkp) {
                obj = lkp.lookup(DataObject.class);
                assert obj != null;
            }

            @Override
            public JComponent getVisualRepresentation() {
                return this;
            }

            @Override
            public JComponent getToolbarRepresentation() {
                return toolbar;
            }

            @Override
            public Action[] getActions() {
                return new Action[0];
            }

            @Override
            public Lookup getLookup() {
                return obj.getLookup();
            }

            @Override
            public void componentOpened() {
            }

            @Override
            public void componentClosed() {
            }

            @Override
            public void componentShowing() {
            }

            @Override
            public void componentHidden() {
            }

            @Override
            public void componentActivated() {
            }

            @Override
            public void componentDeactivated() {
            }

            @Override
            public UndoRedo getUndoRedo() {
                return UndoRedo.NONE;
            }

            @Override
            public void setMultiViewCallback(MultiViewElementCallback mvec) {
                this.mvec = mvec;
            }

            @Override
            public CloseOperationState canCloseElement() {
                return CloseOperationState.STATE_OK;
            }
        }

    It's a fair amount of code, but mostly pretty self-explanatory. The loosely coupled tabs are applicable to all NetBeans editors, not just the Java editor, which is why the "History" tab is now available in all editors throughout NetBeans IDE. In the next blog entry, you'll see the integration of the Visual Library into the panel I embedded in the Java editor.

    Read the article

  • Creating ADF Faces Command Button at Runtime

    - by Frank Nimphius
    In ADF Faces, the command button is an instance of RichCommandButton and can be created at runtime. While creating the button is not difficult at all, adding behavior to it requires knowing how to dynamically create and add an action listener reference. The example code below shows two methods. The first method, handleButtonPress, is a public method exposed on a managed bean:

        public void handleButtonPress(ActionEvent event) {
            System.out.println("Event handled");
            //optional: partially refresh changed components if the command
            //is issued as a partial submit
        }

    The second method is called in response to a user interaction or on page load, and dynamically creates and adds a command button. When the button is pressed, the managed bean method – the action handler – defined above is called. The action handler is referenced using EL in the created MethodExpression instance. If the managed bean is in viewScope, backingBeanScope or pageFlowScope, then you need to add the scope as a prefix to the EL (as you would when configuring the managed bean reference at design time):

        //Create a command button and add it as a child to the parent component
        //that is passed as an argument to this method
        private void createCommandButton(UIComponent parent) {
            RichCommandButton edit = new RichCommandButton();
            //make the request partial
            edit.setPartialSubmit(true);
            edit.setText("Edit");

            //compose the method expression to invoke the event handler
            FacesContext fctx = FacesContext.getCurrentInstance();
            Application application = fctx.getApplication();
            ExpressionFactory elFactory = application.getExpressionFactory();
            ELContext elContext = fctx.getELContext();
            MethodExpression methodExpression = null;
            //Make sure the EL expression references a valid managed bean method.
            //Ensure the bean scope is properly addressed.
            methodExpression = elFactory.createMethodExpression(
                    elContext, "#{myRequestScopeBean.handleButtonPress}",
                    Object.class, new Class[] { ActionEvent.class });
            //Create the command button action listener reference
            MethodExpressionActionListener al = null;
            al = new MethodExpressionActionListener(methodExpression);
            edit.addActionListener(al);
            //add the new command button to the parent component and PPR the
            //component for the button to show
            parent.getChildren().add(edit);
            AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
            adfFacesContext.addPartialTarget(parent);
        }

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause – useful for logging.

    If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new – and in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
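    To make that concrete, here is a minimal sketch of the pattern (the table and column names – dbo.Target, dbo.Source, src, dbo.ChangeAudit – are made up for illustration; they are not the tables from the post's screenshots):

        -- Audit every change the MERGE makes, including the source-only src
        -- column, which never touches the target table.
        INSERT INTO dbo.ChangeAudit (ChangeType, OldName, NewName, SourceSystem)
        SELECT MergeAction, OldName, NewName, Src
        FROM (
            MERGE dbo.Target AS t
            USING dbo.Source AS s
              ON t.Id = s.Id            -- use ON 1 = 2 for the INSERT-only variant
            WHEN MATCHED THEN
                UPDATE SET t.Name = s.Name
            WHEN NOT MATCHED BY TARGET THEN
                INSERT (Id, Name) VALUES (s.Id, s.Name)
            OUTPUT $action AS MergeAction, deleted.Name AS OldName,
                   inserted.Name AS NewName, s.src AS Src
        ) AS changes;

    The outer INSERT ... SELECT over the parenthesised MERGE is the "surrounding the whole MERGE statement in brackets" option described above.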

    Read the article

  • Think before you animate

    - by David Paquette
    Animations are becoming more and more common in our applications. With technologies like WPF, Silverlight and jQuery, animations are becoming easier for developers to use (and abuse). When used properly, animation can augment the user experience. When used improperly, animation can degrade the user experience. Sometimes, the differences can be very subtle. I have recently made use of animations in a few projects and I very quickly realized how easy it is to abuse animation techniques. Here are a few things I have learned along the way.

    1) Don't animate for the sake of animating

    We've all seen the PowerPoint slides with annoying slide transitions that animate 20 different ways. It's distracting and tacky. The same holds true for your application. While animations are fun and becoming easy to implement, resist the urge to use the technology just because you think the technology is amazing.

    2) Animations should (and do) have meaning

    I recently built a simple Windows Phone 7 (WP7) application, Steeped (download it here). The application has 2 pages. The first page lists a number of tea types. When the user taps on one of the tea types, the application navigates to the second page with information about that tea type and some options for the user to choose from.

    One of the last things I did before submitting Steeped to the marketplace was add a page transition between the 2 pages. I chose the Slide / Fade Out transition. When the user selects a tea type, the main page slides to the left and fades out. At the same time, the details page slides in from the right and fades in. I tested it and thought it looked great, so I submitted the app. A few days later, I asked a friend to try the app. He selected a tea type, and I was a little surprised by how he used the app. When he wanted to navigate back to the main page, instead of pressing the back button on the phone, he tried to use a swiping gesture. Of course, the swiping gesture did nothing because I had not implemented that feature. After thinking about it for a while, I realized that the page transition I had chosen implied a particular behaviour. As a user, if an action I perform causes an item (in this case the page) to move, then my expectation is that I should be able to move it back. I have since added logic to handle the swipe gesture, and I think the app flows much better now. When using animation, it pays to ask yourself: What story does this animation tell my users?

    3) Watch the replay

    Some animations might seem great initially but can get annoying over time. When you use an animation in your application, try it over and over again to confirm it doesn't become irritating. When I add an animation, I try to watch it at least 25 times in a row. After watching the animation repeatedly, I can make a more informed decision about whether or not I should keep it. Often, I end up shortening the length of the animations.

    4) Don't get in the user's way

    An animation should never slow the user down. When implemented properly, an animation can give a perceived bump in performance. A good example of this is the page transitions in most of the built-in apps on WP7. Obviously, these page animations don't make the phone any faster, but they do provide a more responsive user experience. Why? Because most of the animations begin as soon as the user has performed some action. The destination page might not be fully loaded yet, but the system responded immediately to the user action, giving the impression that the system is more responsive. If the user did not see anything happen until after the destination page was fully loaded, the application would feel clumsy and slow. Also, it is important to make sure the animation does not degrade the performance (or perceived performance) of the application.

    Just a few things to consider when using animations. As is the case with many technologies, we often learn how to misuse it before we learn how to use it effectively.

    Read the article

  • Synchronized Property Changes (Part 4)

    - by Geertjan
    The next step is to activate the undo/redo functionality... for a Node. Something I've not seen done before. I.e., when the Node is renamed via F2 on the Node, the "Undo/Redo" buttons should start working. Here is the start of the solution, via this item in the mailing list and Timon Veenstra's BeanNode class; note especially the items in bold:

        public class ShipNode extends BeanNode implements PropertyChangeListener, UndoRedo.Provider {

            private final InstanceContent ic;
            private final ShipSaveCapability saveCookie;
            private UndoRedo.Manager manager;
            private String oldDisplayName;
            private String newDisplayName;
            private Ship ship;

            public ShipNode(Ship bean) throws IntrospectionException {
                this(bean, new InstanceContent());
            }

            private ShipNode(Ship bean, InstanceContent ic) throws IntrospectionException {
                super(bean, Children.LEAF, new ProxyLookup(new AbstractLookup(ic), Lookups.singleton(bean)));
                this.ic = ic;
                setDisplayName(bean.getType());
                setShortDescription(String.valueOf(bean.getYear()));
                saveCookie = new ShipSaveCapability(bean);
                bean.addPropertyChangeListener(WeakListeners.propertyChange(this, bean));
            }

            @Override
            public Action[] getActions(boolean context) {
                List<? extends Action> shipActions = Utilities.actionsForPath("Actions/Ship");
                return shipActions.toArray(new Action[shipActions.size()]);
            }

            protected void fire(boolean modified) {
                if (modified) {
                    ic.add(saveCookie);
                } else {
                    ic.remove(saveCookie);
                }
            }

            @Override
            public UndoRedo getUndoRedo() {
                manager = Lookup.getDefault().lookup(UndoRedo.Manager.class);
                return manager;
            }

            private class ShipSaveCapability implements SaveCookie {
                private final Ship bean;

                public ShipSaveCapability(Ship bean) {
                    this.bean = bean;
                }

                @Override
                public void save() throws IOException {
                    StatusDisplayer.getDefault().setStatusText("Saving...");
                    fire(false);
                }
            }

            @Override
            public boolean canRename() {
                return true;
            }

            @Override
            public void setName(String newDisplayName) {
                Ship c = getLookup().lookup(Ship.class);
                oldDisplayName = c.getType();
                c.setType(newDisplayName);
                fireNameChange(oldDisplayName, newDisplayName);
                fire(true);
                fireUndoableEvent("type", ship, oldDisplayName, newDisplayName);
            }

            public void fireUndoableEvent(String property, Ship source, Object oldValue, Object newValue) {
                ReUndoableEdit reUndoableEdit = new ReUndoableEdit(property, source, oldValue, newValue);
                UndoableEditEvent undoableEditEvent = new UndoableEditEvent(this, reUndoableEdit);
                manager.undoableEditHappened(undoableEditEvent);
            }

            private class ReUndoableEdit extends AbstractUndoableEdit {
                private Object oldValue;
                private Object newValue;
                private Ship source;
                private String property;

                public ReUndoableEdit(String property, Ship source, Object oldValue, Object newValue) {
                    super();
                    this.oldValue = oldValue;
                    this.newValue = newValue;
                    this.source = source;
                    this.property = property;
                }

                @Override
                public void undo() throws CannotUndoException {
                    setName(oldValue.toString());
                }

                @Override
                public void redo() throws CannotRedoException {
                    setName(newValue.toString());
                }
            }

            @Override
            public String getDisplayName() {
                Ship c = getLookup().lookup(Ship.class);
                if (null != c.getType()) {
                    return c.getType();
                }
                return super.getDisplayName();
            }

            @Override
            public String getShortDescription() {
                Ship c = getLookup().lookup(Ship.class);
                if (null != String.valueOf(c.getYear())) {
                    return String.valueOf(c.getYear());
                }
                return super.getShortDescription();
            }

            @Override
            public void propertyChange(PropertyChangeEvent evt) {
                if (evt.getPropertyName().equals("type")) {
                    String oldDisplayName = evt.getOldValue().toString();
                    String newDisplayName = evt.getNewValue().toString();
                    fireDisplayNameChange(oldDisplayName, newDisplayName);
                } else if (evt.getPropertyName().equals("year")) {
                    String oldToolTip = evt.getOldValue().toString();
                    String newToolTip = evt.getNewValue().toString();
                    fireShortDescriptionChange(oldToolTip, newToolTip);
                }
                fire(true);
            }
        }

    Undo works when a rename is done, but Redo never does, because Undo is constantly activated, since it is reactivated whenever there is a name change. And why must the UndoRedo.Manager be retrieved from the Lookup (it doesn't work otherwise)? Don't get that part of the code either. Help welcome!

    Read the article

  • GCM: onMessage() from GCMIntentService is never called [migrated]

    - by Shrikant
    I am implementing GCM (Google Cloud Messaging - PUSH Notifications) in my application. I have followed all the steps given in the GCM tutorial on developer.android.com. My application's build target is pointing to Google API 8 (Android 2.2). I am able to get the registration ID from GCM successfully, and I am passing this ID to my application server, so the registration step is performed successfully. Now when my application server sends a PUSH message to my device, the server gets the response SUCCESS=1 FAILURE=0, etc., i.e. the server is sending the message successfully, but my device never receives it. After searching a lot about this, I came to know that GCM pushes messages on port 5228, 5229 or 5230. Initially, my device and laptop were restricted from some websites, but then I was granted permission to access all websites, so I guess these port numbers are open for my device.

    So my question is: I never receive any PUSH message from GCM. My onMessage() in the GCMIntentService class is never called. What could be the reason? Please see my following code and guide me accordingly.

    I have declared the following in my manifest:

        <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="8" />

        <permission android:name="package.permission.C2D_MESSAGE"
            android:protectionLevel="signature" />

        <!-- App receives GCM messages. -->
        <uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
        <!-- GCM connects to Google Services. -->
        <uses-permission android:name="android.permission.INTERNET" />
        <!-- GCM requires a Google account. -->
        <uses-permission android:name="android.permission.GET_ACCOUNTS" />
        <!-- Keeps the processor from sleeping when a message is received. -->
        <uses-permission android:name="android.permission.WAKE_LOCK" />
        <uses-permission android:name="package.permission.C2D_MESSAGE" />
        <uses-permission android:name="android.permission.INTERNET" />

        <receiver
            android:name="com.google.android.gcm.GCMBroadcastReceiver"
            android:permission="com.google.android.c2dm.permission.SEND" >
            <intent-filter>
                <action android:name="com.google.android.c2dm.intent.RECEIVE" />
                <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
                <category android:name="packageName" />
            </intent-filter>
        </receiver>
        <receiver
            android:name=".ReceiveBroadcast"
            android:exported="false" >
            <intent-filter>
                <action android:name="GCM_RECEIVED_ACTION" />
            </intent-filter>
        </receiver>
        <service android:name=".GCMIntentService" />

    And here is my service:

        /**
         * @author Shrikant.
         */
        public class GCMIntentService extends GCMBaseIntentService {

            /**
             * The Sender ID used for GCM.
             */
            public static final String SENDER_ID = "myProjectID";

            /**
             * This field is used to call the web service for GCM.
             */
            SendUserCredentialsGCM sendUserCredentialsGCM = null;

            public GCMIntentService() {
                super(SENDER_ID);
                sendUserCredentialsGCM = new SendUserCredentialsGCM();
            }

            @Override
            protected void onRegistered(Context arg0, String registrationId) {
                Log.i(TAG, "Device registered: regId = " + registrationId);
                sendUserCredentialsGCM.sendRegistrationID(registrationId);
            }

            @Override
            protected void onUnregistered(Context context, String arg1) {
                Log.i(TAG, "unregistered = " + arg1);
                sendUserCredentialsGCM.unregisterFromGCM(LoginActivity.API_OR_BROWSER_KEY);
            }

            @Override
            protected void onMessage(Context context, Intent intent) {
                Log.e("GCM MESSAGE", "Message Recieved!!!");
                String message = intent.getStringExtra("message");
                if (message == null) {
                    Log.e("NULL MESSAGE", "Message Not Recieved!!!");
                } else {
                    Log.i(TAG, "new message= " + message);
                    sendGCMIntent(context, message);
                }
            }

            private void sendGCMIntent(Context context, String message) {
                Intent broadcastIntent = new Intent();
                broadcastIntent.setAction("GCM_RECEIVED_ACTION");
                broadcastIntent.putExtra("gcm", message);
                context.sendBroadcast(broadcastIntent);
            }

            @Override
            protected void onError(Context context, String errorId) {
                Log.e(TAG, "Received error: " + errorId);
                Toast.makeText(context, "PUSH Notification failed.", Toast.LENGTH_LONG).show();
            }

            @Override
            protected boolean onRecoverableError(Context context, String errorId) {
                return super.onRecoverableError(context, errorId);
            }
        }

    Read the article

  • AppFabric Cache - An existing connection was forcibly closed by the remote host

    - by Wallace Breza
    I'm trying to get AppFabric cache up and running in my local development environment. I have Windows Server AppFabric Beta 2 Refresh installed, with the cache cluster and host configured and started, running on Windows 7 64-bit. I'm running my MVC2 website in a local IIS website under a v4.0 app pool in integrated mode.

        HostName : CachePort    Service Name               Service Status    Version Info
        --------------------    ------------               --------------    ------------
        SN-3TQHQL1:22233        AppFabricCachingService    UP                1 [1,1][1,1]

    I have my web.config configured with the following:

        <configSections>
          <section name="dataCacheClient"
                   type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   allowLocation="true"
                   allowDefinition="Everywhere"/>
        </configSections>
        <dataCacheClient>
          <hosts>
            <host name="SN-3TQHQL1" cachePort="22233" />
          </hosts>
        </dataCacheClient>

    I'm getting an error when I attempt to initialize the DataCacheFactory:

        protected CacheService()
        {
            _cacheFactory = new DataCacheFactory();    // <-- Error here
            _defaultCache = _cacheFactory.GetDefaultCache();
        }

    I'm getting the ASP.NET yellow error screen with the following:

        An existing connection was forcibly closed by the remote host
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

        Source Error:
        Line 21: protected CacheService()
        Line 22: {
        Line 23:     _cacheFactory = new DataCacheFactory();
        Line 24:     _defaultCache = _cacheFactory.GetDefaultCache();
        Line 25: }
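    One way to separate configuration problems from connectivity problems is to bypass web.config and configure the client in code. A minimal sketch, assuming the same host and port as above (DataCacheFactoryConfiguration and DataCacheServerEndpoint are the standard AppFabric client types; the console wrapper is illustrative):

        using System;
        using System.Collections.Generic;
        using Microsoft.ApplicationServer.Caching;

        class CacheSmokeTest
        {
            static void Main()
            {
                // Point the client at the cache host explicitly instead of
                // relying on the <dataCacheClient> section in web.config.
                var config = new DataCacheFactoryConfiguration();
                config.Servers = new List<DataCacheServerEndpoint>
                {
                    new DataCacheServerEndpoint("SN-3TQHQL1", 22233)
                };

                var factory = new DataCacheFactory(config);
                var cache = factory.GetDefaultCache();
                cache.Put("ping", DateTime.Now);
                Console.WriteLine(cache.Get("ping"));
            }
        }

    If this also throws the SocketException, the problem is likely network- or host-side (firewall, cluster security settings) rather than the config file.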

    Read the article

  • Where does Lucene.NET cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores the cached query results, how it's configured to do so, and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: if I run a search and sort the result by a particular product field, it seems to work the very first time each search-and-sort combination is used. If I then go in and change some product attributes, reindex and run the same search and sort, I get the products returned in the same order as the very first result.

    Example:

        Product A is named: foo
        Product B is named: bar

    For the first search, sort by name descending. This results in:

        Product A
        Product B

    Now mix up the data a bit. Change the names to:

        Product A named: bar
        Product B named: foo

    Reindex, verify that the index contains the changes for these two products, and search again. Result:

        Product A
        Product B

    Since I changed the alphabetical order of the names, I expected:

        Product B
        Product A

    So I think that Lucene is caching the search results. (Which, btw, is a very good thing.) I just need to know where/how to clear these results. I've tried deleting the index files and doing an IISreset to clear the memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files outside of the indexes that Lucene uses for caching.

    EDIT: I just found out that you must index the field you wish to sort on as un-tokenized. I had the field as tokenized, so sorting didn't work.
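    For reference, a sketch of what the fix in the EDIT looks like in Lucene.NET (2.9-style API; the field names and the writer/searcher setup are placeholders, not code from the question):

        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        // Index the sort field WITHOUT tokenization; analyzed fields remain
        // available for full-text search.
        static void AddProduct(IndexWriter writer, int id, string name, string description)
        {
            Document doc = new Document();
            doc.Add(new Field("id", id.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("name", name, Field.Store.YES, Field.Index.NOT_ANALYZED));           // sort field
            doc.Add(new Field("description", description, Field.Store.YES, Field.Index.ANALYZED)); // searchable
            writer.AddDocument(doc);
        }

        // Sorting by the un-tokenized field then reflects reindexed values.
        static TopDocs SearchSortedByName(IndexSearcher searcher, Query query)
        {
            Sort sort = new Sort(new SortField("name", SortField.STRING, true)); // true = descending
            return searcher.Search(query, null, 100, sort);
        }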

    Read the article

  • Terrible DotNetNuke performance

    - by Peter Bridger
    I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new intranet using it, but performance is terrible. We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads. The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that CPU and memory don't appear to be the bottleneck. We currently have 350 pages in the system, and we plan to add 1,000. So we need to resolve this performance problem so that we can enter content and go live. I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?

    Modules installed:
    - Publish:Engage (not currently in use)
    - Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
    - SimpleGallery
    - XMod
    - Content Manager

    IIS setup:
    - Application recycling completely disabled (apart from a 2am recycle)

    New findings (18th March 2010): The main bottleneck was due to version 5.1.4 having a bug which caused 1,300 database roundtrips on an average page, due to broken database in-memory caching. We've upgraded to 5.2.4, which has resolved this bottleneck. Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance. Is there a navigation interface out there that doesn't drain performance so badly?

    Read the article

  • SQL Cache Dependency not working with Stored Procedure

    - by pjacko
    Hello, I can't get SqlCacheDependency to work with a simple stored proc (SQL Server 2008):

        create proc dbo.spGetPeteTest
        as
        set ANSI_NULLS ON
        set ANSI_PADDING ON
        set ANSI_WARNINGS ON
        set CONCAT_NULL_YIELDS_NULL ON
        set QUOTED_IDENTIFIER ON
        set NUMERIC_ROUNDABORT OFF
        set ARITHABORT ON

        select Id, Artist, Album
        from dbo.PeteTest

    And here's my ASP.NET code (3.5 framework):

        -- global.asax

        protected void Application_Start(object sender, EventArgs e)
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
            System.Data.SqlClient.SqlDependency.Start(connectionString);
        }

        -- Code-behind

        private DataTable GetAlbums()
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["UnigoConnection"].ConnectionString;
            DataTable dtAlbums = new DataTable();
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                // Works using select statement, but NOT SP with same text
                //SqlCommand command = new SqlCommand(
                //    "select Id, Artist, Album from dbo.PeteTest", connection);
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "dbo.spGetPeteTest";

                System.Web.Caching.SqlCacheDependency new_dependency =
                    new System.Web.Caching.SqlCacheDependency(command);

                SqlDataAdapter DA1 = new SqlDataAdapter();
                DA1.SelectCommand = command;
                DataSet DS1 = new DataSet();
                DA1.Fill(DS1);
                dtAlbums = DS1.Tables[0];
                Cache.Insert("Albums", dtAlbums, new_dependency);
            }
            return dtAlbums;
        }

    Anyone have any luck with getting this to work with SPs? Thanks!

    Read the article

  • HashMap key problems

    - by Peterdk
    I'm profiling some old Java code and it appears that my caching of values using a static HashMap and an access method does not work.

    Caching code (a bit abstracted):

        static HashMap<Key, Value> cache = new HashMap<Key, Value>();

        public static Value getValue(Key key) {
            System.out.println("cache size=" + cache.size());
            if (cache.containsKey(key)) {
                System.out.println("cache hit");
                return cache.get(key);
            } else {
                System.out.println("no cache hit");
                Value value = calcValue();
                cache.put(key, value);
                return value;
            }
        }

    Profiling code:

        for (int i = 0; i < 100; i++) {
            getValue(new Key());
        }

    Result output:

        cache size=0
        no cache hit
        (..)
        cache size=99
        no cache hit

    It looked like a standard error in Key's hashCode or equals implementation. However:

        new Key().hashCode() == new Key().hashCode()  // TRUE
        new Key().equals(new Key())                   // TRUE

    What's especially weird is that cache.put(key, value) just adds another value to the HashMap, instead of replacing the current one. So, I don't really get what's going on here. Am I doing something wrong?
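    For comparison, here is a minimal sketch of a Key whose instances do work as cache keys. One classic cause of the symptoms above – the size growing and put() "adding" rather than replacing – is declaring equals(Key) instead of equals(Object), which overloads rather than overrides, so HashMap never calls it. The class below avoids that (the id field is invented for illustration):

        import java.util.HashMap;
        import java.util.Map;

        final class Key {
            private final String id;

            Key(String id) {
                this.id = id;
            }

            @Override
            public boolean equals(Object o) {  // must take Object, not Key
                if (this == o) {
                    return true;
                }
                if (!(o instanceof Key)) {
                    return false;
                }
                return id.equals(((Key) o).id);
            }

            @Override
            public int hashCode() {
                return id.hashCode();
            }
        }

        public class CacheDemo {
            public static void main(String[] args) {
                Map<Key, String> cache = new HashMap<Key, String>();
                for (int i = 0; i < 100; i++) {
                    // All keys are equal, so the map replaces instead of growing.
                    cache.put(new Key("same"), "value" + i);
                }
                System.out.println(cache.size()); // prints 1
            }
        }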

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes (a sketch of this two-layer lookup follows below). Are the opcode caches actually faster than doing a require_once of a PHP file with a var_export'd array of data in it? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors when installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50/60 helper files included, I have a feeling that if the site was under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well; however, it has a side effect of increasing the memory usage dramatically, even though technically the same code is being used. What are your thoughts and opinions on this?
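    To make Q1 concrete, here is a minimal sketch of the two-layer lookup described – APC as the fast layer, a var_export'd PHP file as the slow layer (the cache directory, TTL and key scheme are made up for illustration):

        <?php
        // Layer 1: in-memory cache (APC). Layer 2: an includable PHP file
        // that returns a var_export'd array.
        function cache_get($key) {
            $hit = apc_fetch($key, $success);
            if ($success) {
                return $hit;                     // fast path: memory cache
            }
            $file = '/tmp/cache/' . md5($key) . '.php';
            if (is_file($file)) {
                $value = include $file;          // slow path: parsed PHP file
                apc_store($key, $value, 300);    // promote to the fast layer
                return $value;
            }
            return null;
        }

        function cache_set($key, $value) {
            apc_store($key, $value, 300);
            $code = '<?php return ' . var_export($value, true) . ';';
            file_put_contents('/tmp/cache/' . md5($key) . '.php', $code, LOCK_EX);
        }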

    Read the article

  • Set the property hibernate.dialect error message

    - by user281180
    I am getting the following error when configuring MVC3 and NHibernate. Can anyone guide me on what I have missed, please?

        The dialect was not set. Set the property hibernate.dialect.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: NHibernate.HibernateException: The dialect was not set. Set the property hibernate.dialect.

        Source Error:
        Line 16: {
        Line 17:     NHibernate.Cfg.Configuration configuration = new NHibernate.Cfg.Configuration();
        Line 18:     configuration.AddAssembly(System.Reflection.Assembly.GetExecutingAssembly());
        Line 19:     sessionFactory = configuration.BuildSessionFactory();
        Line 20: }

    My web.config is as follows:

        <configSections>
          <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching"/>
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
          <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate"/>
        </configSections>

        <appSettings>
          <add key="BusinessObjectAssemblies" value="Keeper.API"></add>
          <add key="ConnectionString" value="Server=localhost\SQLSERVER2005;Database=KeeperDev;User=test;Pwd=test;"></add>
          <add key="ClientValidationEnabled" value="true"/>
          <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
        </appSettings>

        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="dialect">NHibernate.Dialect.MsSql2000Dialect</property>
            <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
            <property name="connection.connection_string">Server=localhost\SQLServer2005;Database=KeeperDev;User=test;Pwd=test;</property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
          </session-factory>
        </hibernate-configuration>

    Read the article

  • Is it safe to reuse javax.xml.ws.Service objects

    - by Noel Ang
    I have a JAX-WS-style web service client that was auto-generated with the NetBeans IDE. The generated proxy factory (which extends javax.xml.ws.Service) delegates proxy creation to the various Service.getPort methods. The application that I am maintaining instantiates the factory and obtains a proxy each time it calls the targeted service. Creating new proxy factory instances repeatedly has been shown to be expensive, given that the WSDL document supplied to the factory constructor, an HTTP URI, is re-retrieved for each instantiation. We had success in improving the performance by caching the WSDL, but this has ugly maintenance and packaging implications for us. I would like to explore the suitability of caching the proxy factory itself. Is it safe? E.g., can two different client classes, executing on the same JVM and targeting the same web service, safely use the same factory to obtain distinct proxy objects (or a shared, reentrant one)? I've been unable to find guidance in either the JAX-WS specification or the javax.xml.ws API documentation. The factory-proxy multiplicity is unclear to me. Having Service.getPort rather than Service.createPort does not inspire confidence.
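    For illustration, here is a sketch of the pattern being asked about – one cached Service, fresh ports per call. The endpoint URL, QName and SEI are all hypothetical, and whether the shared Service is thread-safe is exactly the open question, so treat this as the shape of the idea rather than a verdict:

        import java.net.MalformedURLException;
        import java.net.URL;
        import javax.jws.WebService;
        import javax.xml.namespace.QName;
        import javax.xml.ws.Service;

        // Hypothetical SEI standing in for the generated one.
        @WebService
        interface StockPort {
            double getQuote(String symbol);
        }

        public final class StockClient {

            // One Service per JVM: the WSDL is fetched and parsed exactly once.
            private static final Service SERVICE = Service.create(
                    wsdl("http://example.com/stock?wsdl"),                  // placeholder URL
                    new QName("http://example.com/stock", "StockService")); // placeholder QName

            private static URL wsdl(String spec) {
                try {
                    return new URL(spec);
                } catch (MalformedURLException e) {
                    throw new IllegalStateException(e);
                }
            }

            // A fresh proxy per call (or per thread).
            public static StockPort newPort() {
                return SERVICE.getPort(StockPort.class);
            }
        }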

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

    - I have root access
    - I am the only user accessing the server
    - Mostly interested in InnoDB performance
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')

    What I want to do is to create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Till now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the Buffer Pool and Key Buffer get invalidated or how they might interfere with testing. So, short of restarting the MySQL server, how would you recommend setting up an environment that would be reliable in determining if one query performs better than the other?
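    For the query-cache part at least, the per-statement bypass looks like this (the table and column names are placeholders):

        -- SQL_NO_CACHE skips the MySQL query cache for this statement only;
        -- it does NOT touch the InnoDB buffer pool or the OS file cache.
        SELECT SQL_NO_CACHE id, title
        FROM articles
        WHERE title LIKE '%xy%';

    So the first-run/second-run difference still visible under SQL_NO_CACHE is consistent with buffer pool and file system caching, which SQL_NO_CACHE cannot disable. A common convention is therefore to run each query once as a warm-up and compare only the warm timings, since cold-cache numbers are only reproducible after a server restart.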

    Read the article

  • What is a data structure for quickly finding non-empty intersections of a list of sets?

    - by Andrey Fedorov
    I have a set of N items, which are sets of integers; let's assume it's ordered and call it I[1..N]. Given a candidate set, I need to find the subset of I which has a non-empty intersection with the candidate. So, for example, if:

        I = [{1,2}, {2,3}, {4,5}]

    I'm looking to define valid_items(items, candidate), such that:

        valid_items(I, {1}) == {1}
        valid_items(I, {2}) == {1, 2}
        valid_items(I, {3,4}) == {2, 3}

    I'm trying to optimize for one given set I and variable candidate sets. Currently I am doing this by caching items_containing[n] = {the sets which contain n}. In the above example, that would be:

        items_containing = [{}, {1}, {1,2}, {2}, {3}, {3}]

    That is, 0 is contained in no items, 1 is contained in item 1, 2 is contained in items 1 and 2, 3 is contained in item 2, and 4 and 5 are contained in item 3. That way, I can define valid_items(I, candidate) = union(items_containing[n] for n in candidate). Is there any more efficient data structure (of a reasonable size) for caching the result of this union? The obvious example of space 2^N is not acceptable, but N or N*log(N) would be.
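    A direct transcription of that caching scheme, as a baseline to compare any cleverer structure against (using a dict of sets instead of a dense list, so sparse integers cost nothing):

        from collections import defaultdict

        def build_index(items):
            # Map each integer to the (1-based) indices of the sets containing it.
            index = defaultdict(set)
            for i, item in enumerate(items, start=1):
                for n in item:
                    index[n].add(i)
            return index

        def valid_items(index, candidate):
            # O(|candidate|) unions, exactly as described in the question.
            result = set()
            for n in candidate:
                result |= index.get(n, set())
            return result

        I = [{1, 2}, {2, 3}, {4, 5}]
        idx = build_index(I)
        assert valid_items(idx, {1}) == {1}
        assert valid_items(idx, {2}) == {1, 2}
        assert valid_items(idx, {3, 4}) == {2, 3}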

    Read the article

  • Java: library that does nice formatted log outputs

    - by WizardOfOdds
    I cannot find a library I once saw that formatted log output statements in a much nicer way than what is usually seen. One of the features I remember is that it could 'offset' the log message depending on the 'nestedness' of where the log statement was occurring. That is, instead of this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java|  86 | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java| 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java| 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java| 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java| 459 | Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java| 789 | Another debug message

    it would show something like this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java|  86 | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java       | 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 459 |   |__ Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java         | 789 |   |__ Another debug message

    This is an example I just made up (VeryLongCamelCaseClassNamesNotMine). But I remember seeing such cleanly formatted log output, and it was really much nicer than anything I had seen before; in addition to being just plain nicer, it was also easier to read, for it reproduced some of the logical organization of the code. Yet I cannot find what that library was anymore. I'm pretty sure it was fully compatible with log4j or slf4j.

    Read the article

  • No expires header

    - by Tom Gullen
    I have the report from YSlow:

        (no expires) http://static3.scirra.net/avatars/128/40cfdcbd1b1ec1842e199c97c4b85a4a.png

    (And a lot more similar.) In my web.config, though, I have:

        <system.webServer>
          <staticContent>
            <clientCache httpExpires="Sun, 29 Mar 2020 00:00:00 GMT" cacheControlMode="UseExpires" />
          </staticContent>
          <caching>
            <profiles>
              <add extension=".ashx" policy="CacheForTimePeriod" kernelCachePolicy="DontCache" duration="01:00:00" />
              <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
            </profiles>
          </caching>
          <rewrite>
            <rules>
              <rule name="Avatar">
                <match url="avatars/([0-9]+)/(.*).png" />
                <action type="Rewrite" url="gravatar.ashx?hash={R:2}&amp;size={R:1}" appendQueryString="false" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Should this not be adding the expires header correctly? My objectives are:

    - Gravatar.ashx fetches the image from the Gravatar server
    - The server caches the result for 1 hour (similar to SO)
    - An Expires header is added so the client doesn't keep fetching it from my server

    Read the article

  • Section or group name 'cachingConfiguration' is already defined - but where?

    - by Richard Ev
    On Windows XP I am working on a .NET 3.5 web app that's a combination of WebForms and MVC2 (the WebForms parts are legacy, and being migrated to MVC). When I run this from VS2008 using the ASP.NET web server, everything works as expected. However, when I host the app in IIS and try to use it, I see the following error:

        Section or group name 'cachingConfiguration' is already defined. Updates to this may only occur at the configuration level where it is defined.

        Source Error:
        Line 24: </sectionGroup>
        Line 25: <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
        Line 26: <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        Line 27: </configSections>
        Line 28:

    Sure enough, if I remove the offending line (line 26 in the error message) from my web.config then the app runs correctly. However, I really need to find out where the duplicate definition is. It's nowhere in my solution. Where else could it be?

    Read the article

  • Why does Tex/Latex not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file.

    (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could see the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.)

    Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for this (because it would have been slow anyway on those large computers)? But then, on the other hand, why hasn't this annoyed other people so much that they've started a fork which has some sort of caching (or transparent conversion of TeX files to a format which is faster to parse)?

    Is there anything I can do to speed up subsequent runs of LaTeX? Apart from putting all the stuff into chapterXX.tex files and then commenting them out?
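    For the closing question, the standard LaTeX mechanism is \include with \includeonly, which skips typesetting of the unlisted chapters while reusing their .aux data, so page numbering and cross-references stay consistent – no commenting out needed (the chapter file names here are placeholders):

        % main.tex - minimal sketch
        \documentclass{book}
        \includeonly{chapter03}   % only this chapter is retypeset
        \begin{document}
        \include{chapter01}
        \include{chapter02}
        \include{chapter03}
        \end{document}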

    Read the article

  • POST XML to server, receive PDF

    - by Shaggy Frog
    Similar to this question, we are developing a web app where the client clicks a button to receive a PDF from the server. Right now we're using the .ajax() method with jQuery to POST the data the backend needs to generate the PDF (we're sending XML) when the button is pressed, and then the backend generates the PDF entirely in-memory and sends it back as application/pdf in the HTTP response. One answer to that question requires the server side to save the PDF to disk so it can give back a URL for the client to GET; but I don't want the backend caching content at all. The other answer suggests the use of a jQuery plugin, but when you look at its code, it's actually generating a form element and then submitting the form. That method will not work for us, since we are sending XML data in the body of the HTTP request. Is there a way to have the browser open up the PDF without caching the PDF server-side, and without requiring us to throw out our send-data-to-the-server-using-XML solution? (I'd like the browser to behave like it does when a form element is submitted - a POST is made and then the browser looks at the Content-Type header to determine what to do next, like load the PDF in the browser window, a la Safari.)

    Read the article

  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything?

    I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file. I also restart IIS to make sure it is called.

        void Application_Start(object sender, EventArgs e)
        {
            SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString);
        }

    I have placed this entry in the web.config file:

        <system.web>
          <caching>
            <sqlCacheDependency enabled="true" pollTime="10000">
              <databases>
                <add name="Sandbox" connectionStringName="SandboxConnectionString"/>
              </databases>
            </sqlCacheDependency>
          </caching>
        </system.web>

    I call this code to put the item into the cache:

        protected void CacheDataSetButton_Click(object sender, EventArgs e)
        {
            using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString))
            {
                using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection))
                {
                    using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                    {
                        DataSet petsDataSet = new DataSet();
                        sqlDataAdapter.Fill(petsDataSet, "Pets");
                        SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);
                        Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency, DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
                    }
                }
            }
        }

    Then I bind the GridView with this code:

        protected void BindGridViewButton_Click(object sender, EventArgs e)
        {
            if (Cache["Pets"] != null)
            {
                GridView1.DataSource = Cache["Pets"] as DataSet;
                GridView1.DataBind();
            }
        }

    Between attempts to DataBind the GridView, I change the table's values, expecting it to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.

    Read the article

  • Rate Limit Calls To Api Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the last.fm API, using a cfc bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5-minute period. The cfc bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, "lastFmApi.cfc", is initialized in my application and persisted for the lifespan of the application:

        // Application.cfc example
        <cffunction name="onApplicationStart">
            <cfset var apiKey = '[your api key here]' />
            <cfset var apiSecret = '[your api secret here]' />
            <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) />
        </cffunction>

    Now if I want to call the API through a handler/controller, for example my artist handler, I can do this:

        <cffunction name="artistPage" cache="5 mins">
            <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) />
        </cffunction>

    I am a bit confused about caching. I am caching each call to the API in this handler for 5 minutes, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API? Wondering how best to tackle this. Thanks
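    Separately from page caching, one crude way to stay under the hard limit is an application-scoped throttle consulted before every outbound API call. A rough sketch, not last.fm-specific (the variable names are made up, and sleep() assumes ColdFusion 8+; holding the exclusive lock while sleeping serializes all API calls, which is the point here but would not suit high-traffic sites):

        <!--- Call this before each last.fm request. Keeps timestamps of --->
        <!--- recent calls in the application scope and waits when needed. --->
        <cffunction name="waitForApiSlot" access="public" returntype="void" output="false">
            <cflock scope="application" type="exclusive" timeout="10">
                <cfif not structKeyExists(application, "apiCallLog")>
                    <cfset application.apiCallLog = arrayNew(1) />
                </cfif>
                <!--- Drop entries older than one second. --->
                <cfloop condition="arrayLen(application.apiCallLog) gt 0 and (getTickCount() - application.apiCallLog[1]) gt 1000">
                    <cfset arrayDeleteAt(application.apiCallLog, 1) />
                </cfloop>
                <!--- If 5 calls were already made this second, wait out the window. --->
                <cfif arrayLen(application.apiCallLog) gte 5>
                    <cfset sleep(1000 - (getTickCount() - application.apiCallLog[1])) />
                    <cfset application.apiCallLog = arrayNew(1) />
                </cfif>
                <cfset arrayAppend(application.apiCallLog, getTickCount()) />
            </cflock>
        </cffunction>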

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
    - Combine external CSS (red warning)
    - Combine external JavaScript (red warning)
    - Enable gzip compression (red warning)
    - Leverage browser caching (red warning)
    - Leverage proxy caching (amber warning)
    - Minimise cookie size (amber warning)
    - Parallelize downloads across hostnames (amber warning)
    - Serve static content from a cookieless domain (amber warning)

    Web Page Performance
    - Remove unused CSS rules (amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me.

    After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters.

    Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account, and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?

    Read the article
