Search Results

Search found 7513 results on 301 pages for 'actual'.


  • Updating password hashing without forcing a new password for existing users

    - by Willem
    You maintain an existing application with an established user base. Over time it is decided that the current password hashing technique is outdated and needs to be upgraded. Furthermore, for UX reasons, you don't want existing users to be forced to update their password. The whole password hashing update needs to happen behind the scenes.

    Assume a 'simplistic' database model for users that contains: ID, Email, Password. How does one go about solving such a requirement? My current thoughts are:

    1. Create a new hashing method in the appropriate class.
    2. Update the user table in the database to hold an additional password field.
    3. Once a user successfully logs in using the outdated password hash, fill the second password field with the updated hash.

    This leaves me with the problem that I cannot reasonably differentiate between users who have and those who have not updated their password hash, and thus will be forced to check both. This seems horribly flawed. Furthermore, this basically means that the old hashing technique could be forced to stay indefinitely until every single user has updated their password. Only at that moment could I start removing the old hashing check and remove the superfluous database field.

    I'm mainly looking for some design tips here, since my current 'solution' is dirty, incomplete and whatnot, but if actual code is required to describe a possible solution, feel free to use any language.
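
    For illustration, a minimal C# sketch of one common approach: tag every hash produced by the new scheme with a version prefix, so the two formats can be told apart in a single column, and transparently re-hash on the next successful login. The NewHash/NewVerify/OldVerify helpers are hypothetical placeholders for the actual algorithms.

        using System;

        public static class PasswordMigration
        {
            // Hypothetical stand-ins for the real algorithms.
            public static Func<string, string> NewHash;          // e.g. a modern salted scheme
            public static Func<string, string, bool> NewVerify;
            public static Func<string, string, bool> OldVerify;  // the outdated scheme

            // New-scheme hashes carry a "v2$" prefix, so one column holds both formats.
            public static bool VerifyAndUpgrade(string password, ref string storedHash)
            {
                if (storedHash.StartsWith("v2$"))
                    return NewVerify(password, storedHash.Substring(3));

                if (!OldVerify(password, storedHash))
                    return false;

                // Legacy hash matched: upgrade it behind the scenes.
                storedHash = "v2$" + NewHash(password);
                return true;
            }
        }

    This removes the need for a second column entirely, though the old verify path still has to live in the codebase until the last legacy hash is gone.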

    Read the article

  • What is Mark Shuttleworth's "Easter eggs" blog a reference to? [closed]

    - by fluteflute
    I saw this blog post today, and I was wondering if there's some meaning within the Ubuntu community that I've missed? (I know what an easter egg is in a computer context.)

        One of our ducks has started dropping eggs in random locations in the garden. I don’t know which duck, but I assume it’s one of the new females we took in from the SPCA, who hasn’t figured out “nesting” yet. I do love ‘em but they’re not African Grey’s in the IQ department. Anyhow, I think I finally understand why people hide eggs in the garden at Easter. Because ducks used to do it for them! I suppose, for millennia, this has been the season to go hunting for eggs. Now we just substitute chocolate ones instead. For the moment, I’ve kept them in a cool shady spot while I keep an eye out for an actual nest. If a polecat doesn’t find them first, I may be able to slip them onto the nest in time for them to get hatched along with some cousins.

    Read the article

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is: will it help your organization and, in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month.

    In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations, and automatically renders decisions within a business process to create tailored messaging for every customer interaction.

    What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing.

    Click here to learn more about Oracle Customer Experience, and stay tuned for more customer spotlights.

    Read the article

  • Custom inventory items based on inheritance

    - by Bogdan Marginean
    So, here's the scenario: I'm building an RPG. Like most of the other RPGs on the market, my game will feature an inventory and, of course, inventory items. So far I've worked well with using a single class for all items, because I did not need anything other than character stat alteration on item usage (consumption). However, I'd like some items to have a more exotic effect. Think of something like when the user consumes a transformation potion, he automatically turns into a beast.

    In order to achieve this I've thought about declaring a new class that inherits from BaseItem for each item. Each descendant would override some methods (like void OnConsume()) to change the base behavior. This works fine, but when it comes to inventory management, I have some issues. The actual inventory will have to work with BaseItem components only (for obvious reasons, as it's an enumerable collection of objects of the same type); casting any descendant to the base class is possible, so no problems in adding items to the inventory. But how can I keep track of the descendant's type (class) for each item in the inventory? And how can I perform the descendant's OnConsume from within the inventory, for each item?

    Let me know if you can think of a better solution than mine, or if you can think of a solution to my problem only. Development is done in C#, inside Unity 3.5. Thanks!
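
    For what it's worth, virtual dispatch already does the type bookkeeping described here: a collection typed as BaseItem still runs each descendant's override when OnConsume is called. A minimal sketch, with all names illustrative:

        using System.Collections.Generic;

        public abstract class BaseItem
        {
            public virtual void OnConsume() { /* default: alter character stats */ }
        }

        public class TransformationPotion : BaseItem
        {
            public override void OnConsume() { /* turn the consumer into a beast */ }
        }

        public class Inventory
        {
            private readonly List<BaseItem> items = new List<BaseItem>();

            public void Add(BaseItem item) { items.Add(item); }

            public void Consume(int index)
            {
                // Runs TransformationPotion.OnConsume if that's what is stored,
                // even though the list only knows about BaseItem.
                items[index].OnConsume();
            }
        }

    When the concrete type genuinely matters (say, for saving the inventory), item.GetType() or the is/as operators recover it at runtime without any extra tracking field.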

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, Reporting Services lets you delete a shared data source without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there don’t seem to have been any changes here for quite a while.

    dbo.Catalog

    - The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too…
    - Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally.
    - Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).
    - Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever.
    - LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.
    - Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead.

    dbo.DataSource

    - The Primary Key is DSID. Yes, it’s a uniqueidentifier...
    - ItemID is a foreign key reference back to dbo.Catalog.
    - Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.
    - Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though.
    - Flags is an int, and I’ll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET
            [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM [Catalog] AS C
            INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field.

    What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I’ll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source that broke in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
            AND ds.[Link] IS NULL
            AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
            JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
            AND ds.[Link] IS NULL
            AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports need be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley

    Read the article

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position?

    Some examples from friends:

    - I needed to do some work on a production site, so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups.
    - And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database, and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data.

    I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description

    I'm creating (designing) an entity system and I ran into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An entity is just a pointer to an array of components (where the address acts like an entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on component pools.

    Actual problem

    The problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

    - Tank: Transform, Sprite, PhysicalBody
    - BgTree: Transform, Sprite
    - House: Transform, Sprite, Health

    and create 4 Tanks, 5 BgTrees and 2 Houses, and my pools will look like:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using indices. I spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks
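
    As an aside, one common way around the index mismatch is to give each pool its own entity-to-slot lookup (a sparse set), so a subsystem walks the smallest pool it owns and fetches the matching components from the other pools by entity ID. A rough C# sketch under that assumption, with all names illustrative:

        using System.Collections.Generic;

        public class ComponentPool<T>
        {
            private readonly List<T> dense = new List<T>();     // packed components
            private readonly List<int> entityOf = new List<int>(); // slot -> entity id
            private readonly Dictionary<int, int> slotOf = new Dictionary<int, int>(); // entity id -> slot

            public void Add(int entityId, T component)
            {
                slotOf[entityId] = dense.Count;
                entityOf.Add(entityId);
                dense.Add(component);
            }

            public bool TryGet(int entityId, out T component)
            {
                int slot;
                if (slotOf.TryGetValue(entityId, out slot))
                {
                    component = dense[slot];
                    return true;
                }
                component = default(T);
                return false;
            }

            public int Count { get { return dense.Count; } }
            public int EntityAt(int slot) { return entityOf[slot]; }
            public T ComponentAt(int slot) { return dense[slot]; }
        }

    A physics subsystem would then iterate the PhysicalBody pool linearly (staying cache-friendly over its own data) and call transformPool.TryGet(entityAt, out transform) for each slot; only the cross-pool lookups go through the dictionary.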

    Read the article

  • Best practice with pyGTK and Builder XML files

    - by Phoenix87
    I usually design GUIs with Glade, thus producing a series of Builder XML files (one such file for each application window). Now my idea is to define a class, e.g. MainWindow, that inherits from gtk.Window and that implements all the signal handlers for the application's main window. The problem is that when I retrieve the main window from the containing XML file, it is returned as a gtk.Window instance. The solution I have adopted so far is the following: I have defined a class "Window" in the following way:

        class Window():
            def __init__(self, win_name):
                builder = gtk.Builder()
                self.builder = builder
                builder.add_from_file("%s.glade" % win_name)
                self.window = builder.get_object(win_name)
                builder.connect_signals(self)

            def run(self):
                return self.window.run()

            def show_all(self):
                return self.window.show_all()

            def destroy(self):
                return self.window.destroy()

            def child(self, name):
                return self.builder.get_object(name)

    In the actual application code I have then defined a new class, say MainWindow, that inherits from Window, and that looks like:

        class Main(Window):
            def __init__(self):
                Window.__init__(self, "main")

            ### Signal handlers #################################################

            def on_mnu_file_quit_activated(self, widget, data=None):
                ...

    The string "main" refers to the main window, called "main", which resides in the XML Builder file "main.glade" (this is a sort of convention I decided to adopt). So the question is: how can I inherit from gtk.Window directly, by defining, say, the class Foo(gtk.Window), and recast the return value of builder.get_object(win_name) to Foo?

    Read the article

  • Disable pasting in a textbox using jQuery

    - by Michel Grootjans
    I had fun writing this one. My current client asked me to allow users to paste text into textboxes/textareas, but the pasted text should be cleaned of '<...>' tags. Here's what we came up with:

        $(":input").bind('paste', function(e) {
            var el = $(this);
            setTimeout(function() {
                var text = $(el).val();
                $(el).val(text.replace(/<(.*?)>/gi, ''));
            }, 100);
        });

    This is so simple, I'm amazed. The first part just binds a function to the paste operation applied to any input declared on the page:

        $(":input").bind('paste', function(e) {...});

    In the first line, I just capture the element. Then wait for 100ms:

        setTimeout(function() {....}, 100);

    then get the actual value from the textbox, and replace it with a regular expression that basically means "replace everything that looks like '<{0}>' with ''". The gi at the end are regex flags (global, case-insensitive) in JavaScript:

        /<(.*?)>/gi

    Read the article

  • Collection RemoveAll Extension Method

    - by João Angelo
    I had previously posted a RemoveAll extension method for the Dictionary<K,V> class; now it's time to have one for the Collection<T> class. The signature is the same as in the corresponding method already available in List<T>, and the implementation relies on the RemoveAt method to perform the actual removal of each element. Finally, here's the code:

        public static class CollectionExtensions
        {
            /// <summary>
            /// Removes from the target collection all elements that match the specified predicate.
            /// </summary>
            /// <typeparam name="T">The type of elements in the target collection.</typeparam>
            /// <param name="collection">The target collection.</param>
            /// <param name="match">The predicate used to match elements.</param>
            /// <exception cref="ArgumentNullException">
            /// The target collection is a null reference.
            /// <br />-or-<br />
            /// The match predicate is a null reference.
            /// </exception>
            /// <returns>Returns the number of elements removed.</returns>
            public static int RemoveAll<T>(this Collection<T> collection, Predicate<T> match)
            {
                if (collection == null)
                    throw new ArgumentNullException("collection");
                if (match == null)
                    throw new ArgumentNullException("match");

                int count = 0;
                for (int i = collection.Count - 1; i >= 0; i--)
                {
                    if (match(collection[i]))
                    {
                        collection.RemoveAt(i);
                        count++;
                    }
                }
                return count;
            }
        }
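
    Usage then mirrors List<T>.RemoveAll; a quick sketch (with the extension class above in scope):

        using System;
        using System.Collections.ObjectModel;

        class Demo
        {
            static void Main()
            {
                var numbers = new Collection<int> { 3, -1, 4, -1, 5 };
                int removed = numbers.RemoveAll(n => n < 0);
                Console.WriteLine(removed); // 2; the collection now holds 3, 4, 5
            }
        }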

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem me and my team face in almost all of our projects. Testing certain parts of the application with JUnit is not easy, and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads and shared objects, the task of testing is not as simple as testing the class, but testing it under endless possible situations within threading.

    To be more precise, let me tell you about the design of one of our applications: when a user makes a request, several threads are started that each analyse a part of the data to complete the analysis. These threads run a certain time depending on the size of the chunk of data to analyse (the chunks are endless and of uncertain quality), or they may fail if the data was insufficient or lacking quality. After each completes its analysis, it calls upon a handler, which decides after each thread terminates whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they have a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks).

    While the application runs very efficiently and fast, it's not very easy to test it under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking and synchronization?

    Read the article

  • Search in Projects API

    - by Geertjan
    Today I got some help from Jaroslav Havlin, the creator of the new "Search in Projects API". Below are the steps to create a search provider that finds recently modified files, via a new tab in the "Find in Projects" dialog. Here's how to get to the above result.

    Create a new NetBeans module project named "RecentlyModifiedFilesSearch". Then set dependencies on these libraries:

    - Search in Projects API
    - Lookup API
    - Utilities API
    - Dialogs API
    - Datasystems API
    - File System API
    - Nodes API

    Create and register an implementation of "SearchProvider". This class tells the application the name of the provider and how it can be used. It should be registered via the @ServiceProvider annotation. Methods to implement:

    - Method createPresenter creates a new object that is added to the "Find in Projects" dialog when it is opened.
    - Method isReplaceSupported should return true if this provider supports replacing, not only searching.
    - If you want to disable the search provider (e.g., the required external tools aren't available in the OS), return false from isEnabled.
    - Method getTitle returns a string that will be shown in the tab in the "Find in Projects" dialog. It can be localizable.

    Example file "org.netbeans.example.search.ExampleSearchProvider":

        package org.netbeans.example.search;

        import org.netbeans.spi.search.provider.SearchProvider;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = SearchProvider.class)
        public class ExampleSearchProvider extends SearchProvider {

            @Override
            public Presenter createPresenter(boolean replaceMode) {
                return new ExampleSearchPresenter(this);
            }

            @Override
            public boolean isReplaceSupported() {
                return false;
            }

            @Override
            public boolean isEnabled() {
                return true;
            }

            @Override
            public String getTitle() {
                return "Recent Files Search";
            }
        }

    Next, we need to create a SearchProvider.Presenter. This is an object that is passed to the "Find in Projects" dialog and contains a visual component to show in the dialog, together with some methods to interact with it. Methods to implement:

    - Method getForm returns a JComponent that should contain controls for various search criteria. In the example below, we have controls for a file name pattern, search scope, and the age of files.
    - Method isUsable is called by the dialog to check whether the Find button should be enabled or not. You can use the NotificationLineSupport passed as its argument to set a displayed error, warning, or info message.
    - Method composeSearch is used to apply the settings and prepare a search task. It returns a SearchComposition object, as shown below.

    Please note that the example uses ComponentUtils.adjustComboForFileName (and similar methods), which modify a JComboBox component to act as a combo box for selection of a file name pattern. These methods were designed to make working with components created in a GUI Builder comfortable. Remember to call fireChange whenever the value of any criterion changes.
Example file "org.netbeans.example.search.ExampleSearchPresenter": package org.netbeans.example.search; import java.awt.FlowLayout; import javax.swing.BoxLayout; import javax.swing.JComboBox; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JSlider; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; import org.netbeans.api.search.SearchScopeOptions; import org.netbeans.api.search.ui.ComponentUtils; import org.netbeans.api.search.ui.FileNameController; import org.netbeans.api.search.ui.ScopeController; import org.netbeans.api.search.ui.ScopeOptionsController; import org.netbeans.spi.search.provider.SearchComposition; import org.netbeans.spi.search.provider.SearchProvider; import org.openide.NotificationLineSupport; import org.openide.util.HelpCtx; public class ExampleSearchPresenter extends SearchProvider.Presenter { private JPanel panel = null; ScopeOptionsController scopeSettingsPanel; FileNameController fileNameComboBox; ScopeController scopeComboBox; ChangeListener changeListener; JSlider slider; public ExampleSearchPresenter(SearchProvider searchProvider) { super(searchProvider, false); } /** * Get UI component that can be added to the search dialog. */ @Override public synchronized JComponent getForm() { if (panel == null) { panel = new JPanel(); panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS)); JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row2 = new JPanel(new FlowLayout(FlowLayout.LEADING)); JPanel row3 = new JPanel(new FlowLayout(FlowLayout.LEADING)); row1.add(new JLabel("Age in hours: ")); slider = new JSlider(1, 72); row1.add(slider); final JLabel hoursLabel = new JLabel(String.valueOf(slider.getValue())); row1.add(hoursLabel); row2.add(new JLabel("File name: ")); fileNameComboBox = ComponentUtils.adjustComboForFileName(new JComboBox()); row2.add(fileNameComboBox.getComponent()); scopeSettingsPanel = ComponentUtils.adjustPanelForOptions(new JPanel(), false, fileNameComboBox); row3.add(new JLabel("Scope: ")); scopeComboBox = ComponentUtils.adjustComboForScope(new JComboBox(), null); row3.add(scopeComboBox.getComponent()); panel.add(row1); panel.add(row3); panel.add(row2); panel.add(scopeSettingsPanel.getComponent()); initChangeListener(); slider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { hoursLabel.setText(String.valueOf(slider.getValue())); } }); } return panel; } private void initChangeListener() { this.changeListener = new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { fireChange(); } }; fileNameComboBox.addChangeListener(changeListener); scopeSettingsPanel.addChangeListener(changeListener); slider.addChangeListener(changeListener); } @Override public HelpCtx getHelpCtx() { return null; // Some help should be provided, omitted for simplicity. } /** * Create search composition for criteria specified in the form. */ @Override public SearchComposition<?> composeSearch() { SearchScopeOptions sso = scopeSettingsPanel.getSearchScopeOptions(); return new ExampleSearchComposition(sso, scopeComboBox.getSearchInfo(), slider.getValue(), this); } /** * Here we return always true, but could return false e.g. if file name * pattern is empty. */ @Override public boolean isUsable(NotificationLineSupport notifySupport) { return true; } } The last part of our search provider is the implementation of SearchComposition. 
    This is a composition of various search parameters, the actual search algorithm, and the displayer that presents the results. Methods to implement:

    - The most important method here is start, which performs the actual search. In this case, SearchInfo and SearchScopeOptions objects are used for traversing. These objects were provided by controllers of GUI components (in the presenter). When something interesting is found, it should be displayed (with SearchResultsDisplayer.addMatchingObject).
    - Method getSearchResultsDisplayer should return the displayer associated with this composition. The displayer can be created by subclassing the SearchResultsDisplayer class or simply by using SearchResultsDisplayer.createDefault. Then you only need a helper object that can create nodes for found objects.

    Example file "org.netbeans.example.search.ExampleSearchComposition":

        package org.netbeans.example.search;

        // Imports reconstructed for this listing (they were elided in the original post).
        import java.util.concurrent.atomic.AtomicBoolean;
        import org.netbeans.api.search.SearchScopeOptions;
        import org.netbeans.api.search.provider.SearchInfo;
        import org.netbeans.api.search.provider.SearchListener;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.netbeans.spi.search.provider.SearchResultsDisplayer;
        import org.openide.filesystems.FileObject;
        import org.openide.loaders.DataObject;
        import org.openide.loaders.DataObjectNotFoundException;
        import org.openide.nodes.FilterNode;

        public class ExampleSearchComposition extends SearchComposition<DataObject> {

            SearchScopeOptions searchScopeOptions;
            SearchInfo searchInfo;
            int oldInHours;
            SearchResultsDisplayer<DataObject> resultsDisplayer;
            private final Presenter presenter;
            AtomicBoolean terminated = new AtomicBoolean(false);

            public ExampleSearchComposition(SearchScopeOptions searchScopeOptions,
                    SearchInfo searchInfo, int oldInHours, Presenter presenter) {
                this.searchScopeOptions = searchScopeOptions;
                this.searchInfo = searchInfo;
                this.oldInHours = oldInHours;
                this.presenter = presenter;
            }

            @Override
            public void start(SearchListener listener) {
                for (FileObject fo : searchInfo.getFilesToSearch(
                        searchScopeOptions, listener, terminated)) {
                    if (ageInHours(fo) < oldInHours) {
                        try {
                            DataObject dob = DataObject.find(fo);
                            getSearchResultsDisplayer().addMatchingObject(dob);
                        } catch (DataObjectNotFoundException ex) {
                            listener.fileContentMatchingError(fo.getPath(), ex);
                        }
                    }
                }
            }

            @Override
            public void terminate() {
                terminated.set(true);
            }

            @Override
            public boolean isTerminated() {
                return terminated.get();
            }

            /** Use default displayer to show search results. */
            @Override
            public synchronized SearchResultsDisplayer<DataObject> getSearchResultsDisplayer() {
                if (resultsDisplayer == null) {
                    resultsDisplayer = createResultsDisplayer();
                }
                return resultsDisplayer;
            }

            private SearchResultsDisplayer<DataObject> createResultsDisplayer() {
                /** Object to transform matching objects to nodes. */
                SearchResultsDisplayer.NodeDisplayer<DataObject> nd =
                        new SearchResultsDisplayer.NodeDisplayer<DataObject>() {
                    @Override
                    public org.openide.nodes.Node matchToNode(final DataObject match) {
                        return new FilterNode(match.getNodeDelegate()) {
                            @Override
                            public String getDisplayName() {
                                return super.getDisplayName() + " ("
                                        + ageInMinutes(match.getPrimaryFile()) + " minutes old)";
                            }
                        };
                    }
                };
                return SearchResultsDisplayer.createDefault(nd, this, presenter,
                        "less than " + oldInHours + " hours old");
            }

            private static long ageInMinutes(FileObject fo) {
                long fileDate = fo.lastModified().getTime();
                long now = System.currentTimeMillis();
                return (now - fileDate) / 60000;
            }

            private static long ageInHours(FileObject fo) {
                return ageInMinutes(fo) / 60;
            }
        }

    Run the module, select a node in the Projects window, press Ctrl-F, and you'll see the "Find in Projects" dialog has two tabs, the second being the one you provided above.

    Read the article

  • Create Pivot collections much faster than DeepZoomTools CollectionCreator class

    - by John Conwell
    I've been playing with Microsoft Live Labs Pivot to create a hierarchy of collections all linked together to allow someone to explore a hierarchy of data visually. The problem has been the generation time of the entire hierarchy. I end up creating 500 - 600 collections total, and it takes hours and hours using the CollectionCreator class that comes with the DeepZoomTools. So, digging around, I found a way to make the actual DeepZoom collection creation wicked fast: don't use the CollectionCreator!

    It turns out Pivot doesn't actually use the image pyramid generated by the CollectionCreator. Or if it does, it's only when you open a new collection and it shows all the images zooming in. But once the zoom-in is complete, Pivot uses the individual DeepZoom images. What Pivot does need is the XML generated by the CollectionCreator, which is in a very simple format. So what I did was manually generate the XML for the collection image pyramid, then create the folder structure required (one folder per level of the pyramid), and put a single-pixel PNG file in each folder. Now I can create the required files and folders for 500 collections in about 10 seconds. Sweet!

    You still have to use the ImageCreator to create a DeepZoom image for each image in the collection, and that still takes some time, but at least the total processing time is way better.
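
    The folder-and-placeholder step is easy to mechanize. A small C# sketch of just that step, assuming the usual DeepZoom "{name}_files/{level}" folder layout; the tile file name is illustrative:

        using System.IO;

        class PyramidStubber
        {
            // Creates one folder per pyramid level and drops the same 1x1 PNG in each.
            static void StubPyramid(string collectionName, int maxLevel, string onePixelPng)
            {
                for (int level = 0; level <= maxLevel; level++)
                {
                    string dir = Path.Combine(collectionName + "_files", level.ToString());
                    Directory.CreateDirectory(dir);
                    File.Copy(onePixelPng, Path.Combine(dir, "0_0.png"), true); // tile name illustrative
                }
            }
        }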

    Read the article

  • How to dissuade a customer who just learned a technology and wants to use it everywhere?

    - by MainMa
    My customer recently discovered what URL rewriting is, without completely understanding what it is, how it works, and its pros and cons. Now he asks for lots of strange changes to the actual requirements of current projects, and for changes to old projects, in order to implement what he believes is URL rewriting.

    On one hand, I'm annoyed being asked to do things which don't make any sense instead of doing real work. On the other hand, I can't tell my customer that he doesn't understand anything about the subject despite his interest in it.

    I think many people have had situations when their manager or their customer just learned a new buzzword or a new technology, and he loved it so much that he wanted to use it in every project, everywhere, rewrite the whole codebase just to use this new thing, etc. Also, I've recently read something related on Programmers.SE where people told of their experiences when there was a huge buzz around XML, and some managers would ask to introduce XML into every project just to show everyone that they had used it.

    So, those who have been in a similar situation: how have you managed it?

    Read the article

  • Scrolling a WriteableBitmap

    - by Skoder
    I need to simulate my background scrolling, but I want to avoid moving my actual image control. Instead, I'd like to use a WriteableBitmap and a blitting method. What would be the way to simulate an image scrolling upwards? I've tried various things, but I can't seem to get my head around the logic:

        // X pos, Y pos, width, height
        Rect src = new Rect(0, scrollSpeed, 480, height);
        Rect dest = new Rect(0, 700 - scrollSpeed, 480, height);

        // destination rect, source WriteableBitmap, source rect, blend mode
        wb.Blit(destRect, wbSource, srcRect, BlendMode.None);

        scrollSpeed += 5;
        if (scrollSpeed > 700) scrollSpeed = 0;

    If height is 10, the image is quite fuzzy, and more so if the height is 1. If the height is taller, the image is clearer, but it only seems to do a one-to-one copy. How can I 'scroll' the image so that it looks like it's moving up in a continuous loop? (The height of the screen is 700.)
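
    For reference, the usual trick for a continuous wrap-around scroll is two blits per frame: the lower part of the source drawn at the top of the screen, and the upper part drawn beneath it. A sketch of the rectangle math, reusing the Blit signature from the snippet above (offset is an assumed field that wraps at the 700px image height):

        // offset grows each frame and wraps at the image height.
        offset = (offset + scrollSpeed) % 700;

        // Part A: from 'offset' down to the bottom of the source, drawn at the top.
        wb.Blit(new Rect(0, 0, 480, 700 - offset),
                wbSource, new Rect(0, offset, 480, 700 - offset), BlendMode.None);

        // Part B: the top of the source fills the remaining space at the bottom.
        if (offset > 0)
            wb.Blit(new Rect(0, 700 - offset, 480, offset),
                    wbSource, new Rect(0, 0, 480, offset), BlendMode.None);

    Because each frame redraws the full 700px in two pieces, there is no shrinking copy region to blur the image.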

    Read the article

  • Documenting mathematical logic in code

    - by Kiril Raychev
    Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not-so-obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's waaaay harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them: text does not have the expressive power of math.

    I am looking for a more efficient and easier-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notations, equations and diagrams written on paper/whiteboard, and including it in the javadoc. Is there a simpler and clearer way?

    P.S. Giving descriptive names (timeOfFirstEvent instead of t1) to the variables actually makes the code more verbose and even harder to read.

    Read the article

  • cpufreq-selector, cpufreq-info reporting wrong max speed

    - by dty
    Hi, I've just built a new machine with a Core i5-760 CPU. Max speed is 2.80GHz (turbo mode notwithstanding). I've also done a vanilla install of Ubuntu 10.10. I've added the cpufreq-selector applet to the top panel, and its menus only allow me to select up to 2.39GHz. If I select the "performance" governor, it also shows 2.39GHz. cpufreq-info reports:

        $ cpufreq-info
        cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
        Report errors and bugs to [email protected], please.
        analyzing CPU 0:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1 2 3
          CPUs which need to have their frequency coordinated by software: 0
          maximum transition latency: 10.0 us.
          hardware limits: 1.20 GHz - 2.39 GHz
          available frequency steps: 2.39 GHz, 2.26 GHz, 2.13 GHz, 2.00 GHz, 1.86 GHz, 1.73 GHz, 1.60 GHz, 1.46 GHz, 1.33 GHz, 1.20 GHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 1.20 GHz and 2.39 GHz.
                          The governor "ondemand" may decide which speed to use within this range.
          current CPU frequency is 1.20 GHz.
          cpufreq stats: 2.39 GHz:13.74%, 2.26 GHz:0.09%, 2.13 GHz:0.08%, 2.00 GHz:0.08%, 1.86 GHz:0.07%, 1.73 GHz:0.07%, 1.60 GHz:0.08%, 1.46 GHz:0.08%, 1.33 GHz:0.11%, 1.20 GHz:85.61% (15560)
        [...CPUs 1-3 elided, but similar...]

    Any idea how to get Ubuntu and the various tools to recognise this as a 2.80GHz processor? And ideally to report the actual speed when running in turbo mode too, but that's not critical.

    Edit: I should probably add that the BIOS (and Windows) are quite happy that it's a 2.80GHz CPU. Thanks.

    Read the article

  • Why isn't there a typeclass for functions?

    - by Steve314
    I already tried this on Reddit, but there's no sign of a response - maybe it's the wrong place, maybe I'm too impatient. Anyway...

    In a learning problem I've been messing around with, I realised I needed a typeclass for functions, with operations for applying, composing etc. Reasons:

    - It can be convenient to treat a representation of a function as if it were the function itself, so that applying the function implicitly uses an interpreter, and composing functions derives a new description.
    - Once you have a typeclass for functions, you can have derived typeclasses for special kinds of functions - in my case, I want invertible functions.

    For example, functions that apply integer offsets could be represented by an ADT containing an integer. Applying those functions just means adding the integer. Composition is implemented by adding the wrapped integers. The inverse function has the integer negated. The identity function wraps zero. The constant function cannot be provided because there's no suitable representation for it.

    Of course it doesn't need to spell things as if the values were genuine Haskell functions, but once I had the idea, I thought a library like that must already exist, maybe even using the standard spellings. But I can't find such a typeclass in the Haskell library. I found the Data.Function module, but there's no typeclass - just some common functions that are also available from the Prelude.

    So - why isn't there a typeclass for functions? Is it "just because there isn't" or "because it's not so useful as you think"? Or maybe there's a fundamental problem with the idea? The biggest possible problem I've thought of so far is that function application on actual functions would probably have to be special-cased by the compiler to avoid a looping problem - in order to apply this function I need to apply the function application function, and to do that I need to call the function application function, and to do that...

    Read the article

  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase, and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly: it moves with the hand's animation, but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you.

    The source code (forgive the rough prototype code):

        Matrix[] tran = new Matrix[man.model.Bones.Count]; // the absolute transforms from the animation player
        man.model.CopyAbsoluteBoneTransformsTo(tran);

        Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3(); // placeholders for the Matrix Decompose
        Quaternion suitcaseRot = new Quaternion();

        // The transformation of the right hand bone is decomposed
        tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos);

        suitcasePos = new Vector3();
        suitcasePos.X = tempSuitcasePos.Z;  // the axes are inverted for some reason
        suitcasePos.Y = -tempSuitcasePos.Y;
        suitcasePos.Z = -tempSuitcasePos.X;

        // The actual suitcase properties
        suitcase.Position = man.Position + suitcasePos;
        suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z);

    I am also copying the bone transforms from the animation player in the Person class, like so:

        // The transformations from the AnimationPlayer
        Matrix[] skinTrans = new Matrix[model.Bones.Count];
        skinTrans = player.GetBoneTransforms();

        // copy each transformation to its corresponding bone
        for (int i = 0; i < skinTrans.Length; i++)
        {
            model.Bones[i].Transform = skinTrans[i];
        }
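
    A note for anyone hitting the same wall: instead of decomposing the bone matrix and hand-swapping axes, the usual pattern is to compose the attached object's world matrix directly, bone transform times character world. A hedged sketch (man.WorldMatrix and suitcase.WorldMatrix are illustrative stand-ins for however the prototype builds its world matrices):

        Matrix[] tran = new Matrix[man.model.Bones.Count];
        man.model.CopyAbsoluteBoneTransformsTo(tran);

        // World transform of the hand = bone's absolute transform * model's world matrix.
        Matrix handWorld = tran[man.model.Bones["HPF_RightHand"].Index] * man.WorldMatrix;

        // Draw the suitcase with this matrix instead of a decomposed position/rotation;
        // composing rotations by adding Vector3 angles rarely does what's expected.
        suitcase.WorldMatrix = handWorld;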

    Read the article

  • How to break the "php is a bad language" paradigm? [closed]

    - by dukeofgaming
    PHP is not a bad language (or at least not as bad as some may suggest). I had teachers that didn't even know PHP was object-oriented until I told them. I've had clients that immediately distrust us when we say we are PHP developers, and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET.

    Facebook is built on PHP. There are plenty of solid projects that power the web, like Joomla and Drupal, that are used in the enterprise and governments. There are frameworks and libraries that have some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection and interfaces, let alone that PHP now supports horizontal reuse natively and cleanly through traits.

    There are bad programmers and script kiddies that give PHP a bad reputation, but power the PHP community at the same time, and because it is so easy to get stuff done in PHP you can often do things the wrong way, granted - but why blame the language?

    Now, to boil this down to an actual answerable question: what would be a good, solid, short and sweet argument to avoid being frowned upon, stop prejudice in one fell swoop, and defend your honor when you say you are a PHP developer?

    (Free cookie with teh whipped cream to those with empirical evidence of convincing someone - client or other - on the spot.)

    P.S.: We use Symfony, and the code ends up being beautiful and maintainable.

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think. For example, all methods that assume nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right.

    While refactoring, I have revised this case multiple times, and I've come up with the following solutions so far:

    1. Have an internal and a public class for each helper. The internal class contains the actual code, while the public class serves as an access point which does argument checking. Cons:
       - The internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types).
       - It isn't possible to discriminate methods that don't need argument checking.

    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first, this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons:
       - Internal methods require a prefix to avoid ambiguity.

    3. Have an internal class which is implemented by the public class that overrides any members that require argument checking. Cons:
       - It is non-static; at least one instantiation is required. This doesn't really fit the helper-class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.

    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart.

    A note on solution 1: the first consequence can be avoided by putting both classes in different namespaces. For example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
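
    For illustration, a minimal sketch of solution 2 that dodges the prefix problem with an "Unchecked" suffix instead; the names are made up for the example:

        using System;

        public static class PathHelper
        {
            // Public surface: validates arguments, then delegates.
            public static string Normalize(string path)
            {
                if (path == null)
                    throw new ArgumentNullException("path");
                return NormalizeUnchecked(path);
            }

            // Internal fast path for callers that already guarantee non-null input.
            internal static string NormalizeUnchecked(string path)
            {
                return path.Replace('\\', '/');
            }
        }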

    Read the article

  • Oracle Tutor: Document Audit and Maintenance

    - by Emily Chorba
    Perhaps the most critical phase in the process of documenting policies and procedures - and the greatest challenge to owners - is the maintenance of published documents. Documents must reflect current practice and they must be accurate. The most effective way to ensure this is through the regular audit of documents.

    In the Tutor environment, a Document Owner must audit each of his/her documents once every 6 to 12 months to verify that the document reflects actual practice. If it does not, the document is updated or employees are retrained (depending on the nature of the discrepancy). If a document update is required, the Tutor system enables the owner to modify and redistribute the document within one work day. This is possible because:

    - Documents contain a minimum of detail, thereby reducing the edits.
    - Document format and structure are simple, so changes are easy to identify.
    - The Tutor Author software tool enables the Document Owner or the Document Administrator to update the file quickly.

    The Document Administrator verifies the document format and integration, publishes the document, and distributes it to all affected employees, thereby freeing the Document Owner of the more tedious tasks.

    Learn More

    For more information about Tutor, visit Oracle.Com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principle Product Manager
    Oracle Tutor & UPK

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world, it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far.

    Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI, I looked up several papers/articles on steering behavior, but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case, because the agents follow strictly defined paths) but of situations like one agent leaving a dead end while another one wants to enter exactly the same one. Or two agents meeting at a bottleneck which only allows one agent to pass at a time, but both need to pass it (according to the optimal route found before), and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks.

    Difficult to describe, but I guess you get what I mean. Do you have any recommendations for me on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
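
    As one possible starting point, dead ends and bottlenecks are often handled with a reservation table over shared path segments: an agent may only enter a segment it holds, and a whole dead end is claimed before entering, so the head-on case cannot arise. A rough C# sketch, names illustrative:

        using System.Collections.Generic;

        public class SegmentReservations
        {
            private readonly Dictionary<int, int> ownerOf = new Dictionary<int, int>(); // segment id -> agent id

            // An agent that fails to reserve waits at the segment boundary (or re-plans).
            public bool TryReserve(int segmentId, int agentId)
            {
                int owner;
                if (ownerOf.TryGetValue(segmentId, out owner))
                    return owner == agentId; // already held by us
                ownerOf[segmentId] = agentId;
                return true;
            }

            public void Release(int segmentId, int agentId)
            {
                int owner;
                if (ownerOf.TryGetValue(segmentId, out owner) && owner == agentId)
                    ownerOf.Remove(segmentId);
            }
        }

    The same idea scaled up, reserving segments a few steps ahead along the planned path, is the core of cooperative pathfinding approaches such as Silver's WHCA*, which may be a useful search term.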

    Read the article

  • Missing Package: header, Problem with MergeList, The package lists or status file could not be parsed or opened

    - by Inbar Rose
    THIS IS NOT A DUPLICATE OF SIMILAR QUESTIONS (like this). I just had to write that first: there are tons of questions similar to this, all of them with the same redirect to an answer that does not solve my problem, because I don't have the same problem, just the same symptom.

    I write tests for my company's application. One of these tests tries to upgrade the application from a previous version to a new version to make sure nothing breaks. When I am installing an old version of the application, some weird stuff starts to happen. Sometimes everything goes okay and nothing is wrong; other times, when trying to install, I get this message (company app name censored):

        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/XXX-amd64_Packages
        E: The package lists or status file could not be parsed or opened.

    Using the solutions provided in questions similar to this one (like this) does not help, and the problem keeps repeating itself once it happens the first time. This has led me to believe something is wrong on the apt server where the package is being created, but searching for these errors yields no information beyond the "fix" suggested in the question I linked; the only other source of information I could find also did not help (here).

    So I am asking for information:

    - What is the actual problem?
    - What causes the problem?
    - What can fix the problem?

    I hope this question is in good format; if there is a problem or missing information, I can move to chat.

    Read the article

  • Are the Ubuntu ISO images updated from releases.ubuntu.com?

    - by tijybba
    Just got the idea from this (may not be related, though) question. Are the ISO images from the official site updated with updates to the core Ubuntu system? I mean kernel updates, desktop environment updates (Unity), updates of the BASE system including X.org, the office suite, the package manager, the update manager or GNOME base modules - those released in update branches like the precise-updates branch.

    The reason I am asking this: if I download the ISO image of Ubuntu 12.04, say, two or three months after release, I have to do an update of approximately 200~300 MB in size. So why are these ISO images not updated with recent updates? I am aware that all of the components are not updated at the same time, but let's say one month after the actual release (both LTS and normal releases), the updated components could be added to form an updated ISO at regular intervals, which would let new users get the latest versions and features with improved stability and less bandwidth consumption.

    I am not suggesting a comparison to a rolling release, or updates added via external PPAs, and not netinstall either, but an ISO of updated packages. This could be provided as an optional download. Since my question stays within the boundary of official update releases, stability could not be the reason. I guess there are custom packagers out there, but having an official option would be better. It would help distribute the newest ISO OS, which impresses a lot of new users, since it makes the latest features available and, of course, a faster system.

    Another reason for asking this is here.

    Edit: Since almost all new (desktop) users download the default ISOs, which may have one or a few issues that were corrected in later updates, most of the new laptop users I encountered gave up because of it. So should I suggest, for laptops not listed on the certified hardware list, trying the daily builds if needed?

    Read the article
