Search Results

Search found 4126 results on 166 pages for 'bitwise operations'.

Page 23/166 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For all of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to tediously collect yourself, like performance data (throttling, throughput (like completed Orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which (often) even allow drill-down. Installing Helium 2 was quite easy. It comes as an msi file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: an overview of the BTS apps (running? suspended messages?) and basic performance data. You can drill down into the BTS apps further, to see ReceiveLocations, Orchestrations and SendPorts. And then a very nice feature can be activated: you can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions/day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time spent collecting data, writing a tool yourself, or writing documentation for the operations staff on where to find the data. Pros: simple installation; most important data for BTS operations in one place; monitor for alerts if throughput is not met; nice web UI; reasonable price. Cons: additional performance counters cannot be added. I am not sure when the final version is to be shipped, but you will be able to see that on Frends' homepage soon, I guess... A trial version is available here

    Read the article

  • Out-of-the-Box Integration Links Primavera Solutions with PeopleSoft Projects Applications

    - by Sylvie MacKenzie, PMP
    In a move that brings best-in-class enterprise project portfolio management to Oracle’s PeopleSoft enterprise resource planning customers, Oracle announced the integration of Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management. The combination of PeopleSoft financial controls and Primavera portfolio management capabilities brings greater oversight of end-to-end processes to help organizations improve the planning and execution efforts needed to deliver projects on time and within budget. “As an organization with many high-value, project-driven initiatives, we are very pleased to see Oracle’s investment in this important integration,” says Janardhanan Sankar, senior vice president for technology and quality at ITC Infotech India Ltd. Oracle’s PeopleSoft projects applications enable project-centric organizations and departments to establish core operational processes for full project lifecycle management across operations and finance. The integration with Primavera P6 Enterprise Project Portfolio Management means organizations can eliminate costly and difficult-to-maintain proprietary integrations. Organizations can also standardize on the Oracle technologies to Align back-office budgets and costs with project operations to help ensure accurate forecasting of costs, resources, and schedules Provide an accurate single source of truth to financial managers and analysts using Oracle’s PeopleSoft projects applications, and to project managers using Primavera P6 Enterprise Project Portfolio Management  Enhance project collaboration and execution by having all users utilizing common solutions to communicate, plan, and deliver projects “By bringing together Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management, we are able to provide customers with the infrastructure they need to achieve a single source of truth on the projects they are managing,” says Paco Aubrejuan, Oracle’s group vice president and general manager, PeopleSoft. “This real-time visibility drives profitability, increases productivity, and improves operations.” For more information, view the on-demand Webcast, “Bridging Business Processes for Optimal Portfolio Performance,” or read about the new integration.

    Read the article

  • Explicitly pass context object versus injecting with IoC

    - by SonOfPirate
    I have a layered service application where the service layer delegates operations to the domain layer for execution. Many of these operations need to know the context under which they are operating. (The context includes the identity of the current user, culture information, etc. received from the caller.) For example, I have an API method that returns a list of announcements. The list is based on the current user's role, and each announcement is localized to their culture. The API is a thin facade that delegates to an Application Service in my domain layer. The Application Service method obviously needs to know the context of the current request/operation, as another call to the same API from another user should result in a different list. Within this method, we also have logging that uses some of the context information so we have a clear understanding of the context in which the operation was performed (this is especially useful if something goes wrong.) While this is a contrived example, in the real world my Application Services will coordinate operations with many collaborating components, any number of them also needing the context information. My choice is between passing the context to the Application Service, which would then pass it along with any calls to collaborators, and having the IoC container satisfy the dependency that the Application Service and any collaborators have on the context. I am wondering if it is considered good/bad, best practice/code smell, etc. if I pass the context object as a parameter to the domain methods, or if injecting the context via an IoC container is preferred. (EDIT: I should mention that the context object is instantiated per-request.)
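    A minimal C# sketch of the two options being weighed (not from the question; all type and member names are invented for illustration):

        using System.Globalization;

        // Hypothetical per-request context type.
        public class OperationContext
        {
            public string UserName;
            public CultureInfo Culture;
        }

        // Option 1: the context travels as an explicit parameter.
        public class AnnouncementService
        {
            public string[] GetAnnouncements(OperationContext ctx)
            {
                // any collaborator called from here must also receive 'ctx'
                return new[] { string.Format(ctx.Culture, "Hello, {0}", ctx.UserName) };
            }
        }

        // Option 2: a per-request context is supplied by the IoC container.
        public class InjectedAnnouncementService
        {
            private readonly OperationContext _ctx;

            // the container resolves this constructor once per request
            public InjectedAnnouncementService(OperationContext ctx) { _ctx = ctx; }

            public string[] GetAnnouncements()
            {
                return new[] { string.Format(_ctx.Culture, "Hello, {0}", _ctx.UserName) };
            }
        }

    With option 1 the dependency is explicit at every call site; with option 2 the signatures stay clean but the dependency is hidden in the container registration.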

    Read the article

  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need to have geometry functions to detect intersections and containment for the robot's sensors and so that the robot doesn't go inside stuff. One idea for functions is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). Then there is the data itself: whether the data to draw a shape and the data to do geometric operations on that shape should be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure). How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using Javascript instead of a more classical language change how I approach the design?
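    A minimal sketch of the two API shapes described above (the question targets JavaScript on the canvas; this sketch uses C# purely for illustration and all names are invented):

        // The plain data for a shape.
        public class Circle
        {
            public double X, Y, Radius;
        }

        // Option 1: two separate libraries -- geometry and drawing never touch each other.
        public static class Geometry
        {
            public static bool Intersects(Circle a, Circle b)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y, r = a.Radius + b.Radius;
                return dx * dx + dy * dy <= r * r;
            }
        }

        public static class Graphics
        {
            public static void Draw(Circle c /*, drawing context */)
            {
                // rendering calls for a circle would go here
            }
        }

        // Option 2: object-oriented -- each shape draws itself and answers geometry queries:
        //   circle.Draw(context);  circle.Intersects(otherCircle);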

    Read the article

  • What is logical cohesion, and why is it bad or undesirable?

    - by Matt Fenwick
    From the c2wiki page on coupling & cohesion, cohesion (interdependency within a module) strength/level names, from worst to best (high cohesion is good):
    - Coincidental Cohesion (worst): module elements are unrelated.
    - Logical Cohesion: elements perform similar activities as selected from outside the module, i.e. by a flag that selects the operation to perform (see also CommandObject); i.e. the body of the function is one huge if-else/switch on an operation flag.
    - Temporal Cohesion: operations related only by the general time they are performed (i.e. initialization() or FatalErrorShutdown()).
    - Procedural Cohesion: elements involved in different but sequential activities, each on different data (usually could be trivially split into multiple modules along linear sequence boundaries).
    - Communicational Cohesion: unrelated operations except that they need the same data or input.
    - Sequential Cohesion: operations on the same data in significant order; output from one function is input to the next (pipeline).
    - Informational Cohesion: a module performs a number of actions, each with its own entry point, with independent code for each action, all performed on the same data structure. Essentially an implementation of an abstract data type, i.e. define the structure of sales_region_table and its operators: init_table(), update_table(), print_table().
    - Functional Cohesion: all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation: get_engine_temperature(), add_sales_tax().
    (Emphasis mine.) I don't fully understand the definition of logical cohesion. My questions are: what is logical cohesion? Why does it get such a bad rap (2nd worst kind of cohesion)?
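    A short C# sketch of what the quoted definition describes (names invented; not from the question): a logically cohesive module bundles loosely related actions behind one entry point and lets the caller pick the behaviour with a flag.

        using System;

        public enum FileOp { Read, Write, Delete }

        public static class FileModule
        {
            // Logically cohesive: the elements are only related by being "file stuff",
            // and the caller selects the behaviour from outside via the flag.
            public static void Execute(FileOp op, string path)
            {
                switch (op)
                {
                    case FileOp.Read:   Console.WriteLine("reading " + path);  break;
                    case FileOp.Write:  Console.WriteLine("writing " + path);  break;
                    case FileOp.Delete: Console.WriteLine("deleting " + path); break;
                }
            }
        }

        // The functionally cohesive alternative: one well-defined task per method, no flag.
        // public static void ReadFile(string path) { ... }
        // public static void WriteFile(string path) { ... }

    The flag-and-switch shape is why it rates poorly: every new operation changes the one method, and callers must know the flag values rather than calling a task-specific function.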

    Read the article

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible Wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI. Multiple conditions need only be combined with "and". I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem.
    Initial Design
    UI, where:
    - The first column allows the user to select a question from a previous screen.
    - The second column allows the user to select an operator applicable to the type of question asked.
    - The third column allows the user to enter one or more values depending on the selected operator.
    Object Model
        public enum Operations { ... }

        public class Condition
        {
            int QuestionId { get; set; }
            Operations Operation { get; set; }
            List<object> Parameters { get; private set; }
        }

        List<Condition> pageSkipConditions;
    Controller Logic
        bool allConditionsTrue = pageSkipConditions.Count > 0;
        foreach (Condition c in pageSkipConditions)
        {
            allConditionsTrue &= Evaluate(previousAnswers, c);
        }
        // ...
        private bool Evaluate(List<Answers> previousAnswers, Condition c)
        {
            switch (c.Operation)
            {
                case Operations.StartsWith:
                    // logic for this operation
                    // etc.
            }
        }

    Read the article

  • Uralelektrostroy Improves Turnaround Times for Engineering and Construction Projects by Approximately 50% with Better Project Data Management

    - by Melissa Centurio Lopes
    LLC Uralelektrostroy was established in 1998, to meet the growing demand for reliable energy supply, which included the deployment and operation of a modern power grid system for Russia’s booming economy and industrial sector. To rise to the challenge, the country required a company with a strong reputation and the ability to strategically operate energy production and distribution facilities. As a renowned energy expert, Uralelektrostroy successfully embarked on the mission—focusing on the design, construction, and operation of power grids, transmission lines, and generation facilities. Today, Uralelektrostroy leads the Russian utilities industry with operations across the country, particularly in the Ural, Western Siberia, and Moscow regions. Challenges: Track work progress through all engineering project development stages with ease—from planning and start-up operations, to onsite construction and quality assurance—to enhance visibility into complex projects, such as power grid and power-transmission-line construction Implement and execute engineering projects faster—for example, designing and building power generation and distribution facilities—by better monitoring numerous local subcontractors Improve alignment of project schedules with project owners’ requirements—awarding federal and regional authorities—to avoid incurring fines for missing deadlines Solutions: Used Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 to streamline communication with customers and subcontractors through better data management and harmonized reporting, reducing construction project implementation and turnaround times by approximately 50%, on average Enabled fast generation of work-in-progress reports that track project schedules, budgets, materials, and staffing—from approval and material procurement, to construction and delivery Reduced the number of construction sites by nearly 30% (from 35 to 25) by identifying unprofitable sites—streamlining operations at the company’s construction site network and increasing profitability Improved project visibility by enabling managers to efficiently track project status, ensuring on-time reporting and punctual project deliveries to federal customers to reduce delay penalties to zero “Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 drastically changed the way we run our business. We’ve reduced the number of redundant assets, streamlined project implementation and execution, and improved collaboration with our customers and contractors. Overall, the Oracle deployment helped to increase our profitability.” – Roman Aleksandrovich Naumenko, Head of Information Technology, LLC Uralelektrostroy Read the complete customer snapshot here.

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database that uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by the relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.
    Example: pseudo-SQL operations needed for processing an online e-tailer sale:
    - Insert Customer into Customers
    - Insert New Order into Orders
    - Insert Each New Order Item into OrderItems
    - Insert Customer Payment Info into PaymentInfo
    - Insert Payment Processing Result into PaymentDetails
    - Update Customer for Current Order Status
    Common Relational Database Management Systems: Microsoft SQL Server, Microsoft Access, Oracle, MySQL, DB2. It is important to note that no current RDBMS has fully implemented all of the relational principles.
    Common RDBMS traits: volatile data; supports transaction processing; optimized for updates and simple queries.
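    A minimal ADO.NET sketch (not from the article; connection string, table and column names are invented) showing how the e-tailer operations above would be grouped into a single transaction so they either all succeed or all roll back:

        using System.Data.SqlClient;

        class OrderWriter
        {
            static void SaveOrder(string connectionString)
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    SqlTransaction tx = conn.BeginTransaction();
                    try
                    {
                        Run(conn, tx, "INSERT INTO Customers (Name) VALUES ('Jane Doe')");
                        Run(conn, tx, "INSERT INTO Orders (CustomerId) VALUES (1)");
                        Run(conn, tx, "INSERT INTO OrderItems (OrderId, Sku) VALUES (1, 'ABC')");
                        Run(conn, tx, "UPDATE Orders SET Status = 'Paid' WHERE OrderId = 1");
                        tx.Commit();    // all operations are applied together...
                    }
                    catch
                    {
                        tx.Rollback();  // ...or none of them are applied
                        throw;
                    }
                }
            }

            static void Run(SqlConnection conn, SqlTransaction tx, string sql)
            {
                using (SqlCommand cmd = new SqlCommand(sql, conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }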

    Read the article

  • C programming multiple storage backends

    - by ahjmorton
    I am starting a side project in C which requires multiple storage backends to be driven by a particular piece of logic. These storage backends would each be linked in, with the decision of which one to use specified at runtime. So for example if I invoke my program with one set of parameters it will perform the operations in memory, but if I change the program configuration it would write to disk. The underlying idea is that each storage backend should implement the same protocol. In other words, the logic for performing operations should not need to know which backend it is operating on. Currently the way I have thought to provide this indirection is to have a struct of function pointers, with the logic calling these function pointers. Essentially the struct would contain all the operations needed to implement the higher level logic. E.g.:
        struct Context {
            void (*doPartOfDoOp)(void);
            int (*getResult)(void);
        };

        // logic.h
        void doOp(Context * context) {
            // bunch of stuff
            context->doPartOfDoOp();
        }

        int getResult(Context * context) {
            // bunch of stuff
            return context->getResult();
        }
    My question is whether this way of solving the problem is one a C programmer would understand. I am a Java developer by trade but enjoy using C/++. Essentially the Context struct provides an interface-like level of indirection. However I would like to know if there is a more idiomatic way of achieving this.

    Read the article

  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provide ease of use and fully reusable code; I plan to write a factory for each of the objects managed (group, OU, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are service accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file's designated objects; User makes changes if needed, and launches the task operations; Application performs the operations required by the XML input file against the underlying AD, as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions. Nice-to-have features: Application rolls back operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provide ease of use and fully reusable code; I plan to write a factory for each of the objects managed (group, OU, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are service accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file's designated objects; User makes changes if needed, and launches the task operations; Application performs the operations required by the XML input file against the underlying AD, as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions. Nice-to-have features: Application rolls back operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; but the created user is automatically disabled? I seem to get confused by the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class to add a group. Why is this? Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!

    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong: it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and make calls to the web service, and then explain to the Operations guys how to run those tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relates to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if there were a way of giving the test runner itself a base URL and letting it download, say, a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
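    A hedged sketch (all names, URLs and config keys invented) of the kind of NUnit "smoke" fixture described above, with the base URL pulled from configuration so the same tests can be pointed at UAT or Production:

        using System;
        using System.Configuration;
        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class SmokeTests
        {
            private string _baseUrl;

            [SetUp]
            public void SetUp()
            {
                // e.g. <appSettings><add key="BaseUrl" value="http://test.example.com"/></appSettings>
                _baseUrl = ConfigurationManager.AppSettings["BaseUrl"];
            }

            [Test]
            public void HomePage_Returns_Html()
            {
                using (WebClient client = new WebClient())
                {
                    string body = client.DownloadString(_baseUrl + "/");
                    Assert.IsTrue(body.Contains("<html"), "Home page did not return HTML");
                }
            }

            [Test]
            public void WebService_Responds()
            {
                using (WebClient client = new WebClient())
                {
                    // hypothetical service endpoint path
                    string body = client.DownloadString(_baseUrl + "/service.svc");
                    Assert.IsNotNull(body);
                }
            }
        }

    Operations (or a deployment script) would then only need the nUnit console runner and the right BaseUrl value for the target environment.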

    Read the article

  • How to approach copying objects with smart pointers as class attributes?

    - by tomislav-maric
    From the boost library documentation I read this: Conceptually, smart pointers are seen as owning the object pointed to, and thus responsible for deletion of the object when it is no longer needed. I have a very simple problem: I want to use RAII for pointer attributes of a class that is Copyable and Assignable. The copy and assignment operations should be deep: every object should have its own copy of the actual data. Also, RTTI needs to be available for the attributes (their type may also be determined at runtime). Should I be searching for an implementation of a Copyable smart pointer (the data are small, so I don't need copy-on-write pointers), or do I delegate the copy operation to the copy constructors of my objects as shown in this answer? Which smart pointer do I choose for simple RAII of a class that is copyable and assignable? (I'm thinking that unique_ptr with copy/assignment operations delegated to the class copy constructor and assignment operator would be a proper choice, but I am not sure.) Here's pseudocode for the problem using raw pointers; it's just a problem description, not running C++ code:
        // Operation interface
        class ModelOperation
        {
        public:
            virtual void operate() = 0;
        };

        // Implementation of an operation called Special
        class SpecialModelOperation : public ModelOperation
        {
        private:
            // Private attributes are present here in a real implementation.

        public:
            // Implement operation
            void operate() {};
        };

        // All operations conform to the ModelOperation interface.
        // These are possible operation names:
        // class MoreSpecialOperation;
        // class DifferentOperation;

        // Concrete model with different operations
        class MyModel
        {
        private:
            ModelOperation* firstOperation_;
            ModelOperation* secondOperation_;

        public:
            MyModel()
                : firstOperation_(0), secondOperation_(0)
            {
                // Forgetting about run-time type definition from input files here.
                firstOperation_ = new MoreSpecialOperation();
                secondOperation_ = new DifferentOperation();
            }

            void operate()
            {
                firstOperation_->operate();
                secondOperation_->operate();
            }

            ~MyModel()
            {
                delete firstOperation_;
                firstOperation_ = 0;

                delete secondOperation_;
                secondOperation_ = 0;
            }
        };

        int main()
        {
            MyModel modelOne;

            // Some internal scope
            {
                // I want modelTwo to have its own set of copied, not referenced
                // operations, and at the same time I need RAII to work for it,
                // as soon as it goes out of scope.
                MyModel modelTwo(modelOne);
            }

            return 0;
        }

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it (the SET and EXIT lines below):
        robocopy d:\backups \\BackupServer\BackupFolder *.bak

        rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
        set /A errlev="%ERRORLEVEL% & 24"

        rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
        exit /B %errlev%
    (The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and AND them together. (I'll leave this as an exercise for the reader.) The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" at a command prompt.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.
        SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012')
    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user scan, seek, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except instead of the various operations being generated from user requests, they are generated from system background requests. This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes do have a downside as well. Indexes slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but for a Production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, but yet it is incurring a cost on your system due to it constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible. One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics; then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • SQL SERVER – Importance of User Without Login

    - by pinaldave
    Some questions are very open ended and it is very hard to come up with exact requirements. Here is one question I was asked in a recent User Group Meeting. Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?" Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it. Answer: Let us visualize a scenario. An application has lots of different operations and many of them are very sensitive operations. The common practice was to give the application a specific role which has more permissions and a higher access level. When a regular user logs in (not a system admin), he/she might have very restrictive permissions. The application itself had a username and password, which means the application could directly log in to the database and perform the operation. Developers were well aware of the username and password as it was embedded in the application. When a developer left the organization or when the password was changed, the parts of the application where the same username and password were used had to be changed. Additionally, developers were able to use the same username and password and log in directly to the same application. In earlier versions of SQL Server there were application roles. The same was later on replaced by "User without Login". Now let us recreate the above scenario using this new "User without Login". In this case, the user will have to log in to SQL Server using their own credentials. This means that the user who is logged in will have his/her own username and password. Once the login is done in SQL Server, the user will be able to use the application. Now the database should have another User without Login which has all the necessary permissions and rights to execute various operations. Now, the application will be able to execute the script by impersonating the "user without login – with more permissions". Here it is assumed that the user's login does not have enough permissions and that the other user (without login) has more rights. If a user knows how the application is using the database and its various operations, he can switch the context to the user without login, enabling him to make further modifications. Make sure to explicitly DENY the VIEW DEFINITION permission on the database. This will make things more difficult for the user, as he will have to know the exact details to get additional permissions. If a user is a System Admin, all the details which I just mentioned in the above three paragraphs do not apply, as an admin always has access to everything. Additionally, the method described above is just one architecture, and if someone is attempting to damage the system, they will still be able to figure out a workaround. You will have to put further auditing and policy-based management in place to prevent such incidents and accidents. I guess this is my answer. I read it multiple times but I still feel that I am missing something. There should be more to this concept than what I have just described. I have merely described one scenario but there will be many more scenarios where this situation will be useful. Now it is your turn to help – please leave a comment with additional suggestions about where exactly "User without Login" will be useful, as well as anything I missed when describing the above scenario.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Enum types, FlagAttribute & Zero value

    - by nmgomes
    We all know about Enum types and use them every single day. What is not used that often is decorating the Enum type with the FlagsAttribute. When an Enum type has the FlagsAttribute we can assign multiple values to it and thus combine multiple pieces of information into a single enum. The enum values should be a power of two so that a bit set is achieved. Here is a typical Enum type:
        public enum OperationMode
        {
            /// <summary>
            /// No operation mode
            /// </summary>
            None = 0,
            /// <summary>
            /// Standard operation mode
            /// </summary>
            Standard = 1,
            /// <summary>
            /// Accept bubble requests mode
            /// </summary>
            Parent = 2
        }
    In such a scenario no value combinations are possible. In the following scenario a default operation mode exists and combination is used:
        [Flags]
        public enum OperationMode
        {
            /// <summary>
            /// Asynchronous operation mode
            /// </summary>
            Async = 0,
            /// <summary>
            /// Synchronous operation mode
            /// </summary>
            Sync = 1,
            /// <summary>
            /// Accept bubble requests mode
            /// </summary>
            Parent = 2
        }
    Now, it's possible to write statements like:
        [DefaultValue(OperationMode.Async)]
        [TypeConverter(typeof(EnumConverter))]
        public OperationMode Mode { get; set; }

        /// <summary>
        /// Gets a value indicating whether this instance supports requests from children.
        /// </summary>
        public bool IsParent
        {
            get { return (this.Mode & OperationMode.Parent) == OperationMode.Parent; }
        }
    or
        switch (this.Mode)
        {
            case OperationMode.Sync | OperationMode.Parent:
                Console.WriteLine("Sync,Parent");
                break;
            [...]
        }
    But there is something that you should never forget: Zero is the absorbing element for the bitwise AND operation. So, checking for the OperationMode.Async mode (the Zero value) just like the OperationMode.Parent mode makes no sense, since it will always be true: (this.Mode & 0x0) == 0x0. Instead, inverse logic should be used: OperationMode.Async = !OperationMode.Sync
        public bool IsAsync
        {
            get { return (this.Mode & ContentManagerOperationMode.Sync) != ContentManagerOperationMode.Sync; }
        }
    or
        public bool IsAsync
        {
            get { return (int)this.Mode == 0; }
        }
    Final Note:
    Benefits: Allows multiple values to be combined. The above sample snippets were taken from an ASP.NET control and enabled the following markup usage: <my:Control runat="server" Mode="Sync,Parent">
    Drawback: The Zero value is the absorbing element for the bitwise AND operation. Be very careful when evaluating the Zero value; either evaluate the enum value as an integer or use inverse logic.

    Read the article

  • I want to learn to program in SDL C++, where do I start? I want to learn only what I need to start making 2D games [on hold]

    - by user2644399
    Lazyfoo of Lazyfoo.net, author of the SDL 2D tutorial, wrote that in order for me to start game programming in SDL, I need to know these concepts well: operators, controls, loops, functions, structures, arrays, references, pointers, classes, objects, how to use a template, and bitwise and/or. I want to know the fastest way to learn as much basic C++ as I need to make 2D games. Thanks in advance.

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS. Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back, thus, that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters. Firstly, the web service architecture: We currently have two operations: GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before the item. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc). Does the type of web api (operation over service focus) determine chunky vs chatty? Since this is mobile (3G), are we better handling lots of smaller messages, or a few large ones? Secondly, the iOS side. What is the current advice on how to manage data synchronization within the iOS (5) app itself. We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice xcode libraries out there to do such things. I welcome any advice you might have.

    Read the article

  • Enum types, FlagsAttribute & Zero value – Part 2

    - by nmgomes
    In my previous post I wrote about why you should pay attention when using the enum value Zero. After reading that post you are probably thinking, like Benjamin Roux: why don't you start the enum values at 0x1? Well, I could, but doing that I would lose the ability to have Sync and Async mutually exclusive by design. Take a look at the following enum types:
        [Flags]
        public enum OperationMode1
        {
            Async = 0x1,
            Sync = 0x2,
            Parent = 0x4
        }

        [Flags]
        public enum OperationMode2
        {
            Async = 0x0,
            Sync = 0x1,
            Parent = 0x2
        }
    To achieve mutual exclusion between the Sync and Async values using OperationMode1 you would have to test both values:
        protected void CheckMainOperarionMode(OperationMode1 mode)
        {
            switch (mode)
            {
                case (OperationMode1.Async | OperationMode1.Sync | OperationMode1.Parent):
                case (OperationMode1.Async | OperationMode1.Sync):
                    throw new InvalidOperationException("Cannot be Sync and Async simultaneous");
                    break;
                case (OperationMode1.Async | OperationMode1.Parent):
                case (OperationMode1.Async):
                    break;
                case (OperationMode1.Sync | OperationMode1.Parent):
                case (OperationMode1.Sync):
                    break;
                default:
                    throw new InvalidOperationException("No default mode specified");
            }
        }
    but this is a by-design constraint in OperationMode2. Why? Simply because 0x0 is the neutral element for the bitwise OR operation. Knowing this singularity, replacing and simplifying the previous method, you get:
        protected void CheckMainOperarionMode(OperationMode2 mode)
        {
            switch (mode)
            {
                case (OperationMode2.Sync | OperationMode2.Parent):
                case (OperationMode2.Sync):
                    break;
                case (OperationMode2.Parent):
                default:
                    break;
            }
        }
    This means that: if both the Sync and Async values are specified, the Sync value always wins (Zero is the neutral element for the bitwise OR operation); if no Sync value is specified, the Async mode is used. Here is the final method implementation:
        protected void CheckMainOperarionMode(OperationMode2 mode)
        {
            if ((mode & OperationMode2.Sync) == OperationMode2.Sync)
            {
            }
            else
            {
            }
        }
    All of the content above proves that the Async value (0x0) is useless from the arithmetic perspective, but without it we lose readability. The following IF statements are logically equal, but the first is definitely more readable:
        if (OperationMode2.Async | OperationMode2.Parent) { }
        if (OperationMode2.Parent) { }
    Here's another example where you can see the benefit of the 0x0 value: the default value can be used explicitly.
        <my:Control runat="server" Mode="Async,Parent">
        <my:Control runat="server" Mode="Parent">

    Read the article

  • Mutability design patterns in Objective C and C++

    - by Mac
    Having recently done some development for iPhone, I've come to notice an interesting design pattern used a lot in the iPhone SDK, regarding object mutability. It seems the typical approach there is to define an immutable class NSFoo, and then derive from it a mutable descendant NSMutableFoo. Generally, the NSFoo class defines data members, getters and read-only operations, and the derived NSMutableFoo adds on setters and mutating operations. Being more familiar with C++, I couldn't help but notice that this seems to be a complete opposite to what I'd do when writing the same code in C++. While you certainly could take that approach, it seems to me that a more concise approach is to create a single Foo class, mark getters and read-only operations as const functions, and also implement the mutable operations and setters in the same class. You would then end up with a mutable class, but the types Foo const*, Foo const& etc all are effectively the immutable equivalent. I guess my question is, does my take on the situation make sense? I understand why Objective-C does things differently, but are there any advantages to the two-class approach in C++ that I've missed? Or am I missing the point entirely? Not an overly serious question - more for my own curiosity than anything else.

    Read the article

  • Wrapping FUSE from Go

    - by Matt Joiner
    I'm playing around with wrapping FUSE with Go. However I've become stuck on how to deal with struct fuse_operations. I can't seem to expose the operations struct by declaring type Operations C.struct_fuse_operations, as the members are lower case, and my pure-Go sources would have to use C hackery to set the members anyway. My first error in this case is "can't set getattr" in what looks to be the Go equivalent of a default copy constructor. My next attempt is to expose an interface that expects GetAttr, ReadLink etc, and then generate C.struct_fuse_operations and bind the function pointers to closures that call the given interface. This is what I've got (explanation continues after the code):
        package fuse

        // #include <fuse.h>
        // #include <stdlib.h>
        import "C"

        import (
            //"fmt"
            "os"
            "unsafe"
        )

        type Operations interface {
            GetAttr(string, *os.FileInfo) int
        }

        func Main(args []string, ops Operations) int {
            argv := make([]*C.char, len(args)+1)
            for i, s := range args {
                p := C.CString(s)
                defer C.free(unsafe.Pointer(p))
                argv[i] = p
            }
            cop := new(C.struct_fuse_operations)
            cop.getattr = func(*C.char, *C.struct_stat) int {}
            argc := C.int(len(args))
            return int(C.fuse_main_real(argc, &argv[0], cop, C.size_t(unsafe.Sizeof(cop)), nil))
        }

        package main

        import (
            "fmt"
            "fuse"
            "os"
        )

        type CpfsOps struct {
            a int
        }

        func (me *CpfsOps) GetAttr(string, *os.FileInfo) int {
            return -1
        }

        func main() {
            fmt.Println(os.Args)
            ops := &CpfsOps{}
            fmt.Println("fuse main returned", fuse.Main(os.Args, ops))
        }
    This gives the following error:
        fuse.go:21[fuse.cgo1.go:23]: cannot use func literal (type func(*_Ctype_char, *_Ctype_struct_stat) int) as type *[0]uint8 in assignment
    I'm not sure what to pass to these members of C.struct_fuse_operations, and I've seen mention in a few places that it's not possible to call from C back into Go code. If it is possible, what should I do? How can I provide the "default" values for interface functions that act as though the corresponding C.struct_fuse_operations member is set to NULL?

    Read the article

  • Hashtable is that fast

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of the Java string; I assume the rest of the languages are similar or close to this implementation. Suppose we have a hash table and a list of 50 elements, where each element is 7 chars: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn. Assume each bucket of the hash table contains 5 strings (I think this function will make it one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), the list will do 6 comparisons per element and discover the difference on the 7th. The hash table will take around 70 operations (multiplications and additions) to get the hashcode and to compare with the 5 strings in the bucket – and bang, it's found. For the list it will take around 300 comparisons to find it. For the case where there are only 10 elements, the list will take around 70 operations but the hash table will take around 50 operations – and note that hash table operations are more time-consuming (they are multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a Hashtable of unknown size, because it will let me use a list until the list grows past 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values; I wonder why there is no HybridSet!! So what do you think? Thanks
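    A small C# sketch (not from the question) of the Java-style string hash quoted above, computed incrementally as h = h*31 + s[i], which is where the "multiplication and addition per character" cost estimate comes from:

        using System;

        class JavaStringHash
        {
            static int Hash(string s)
            {
                int h = 0;
                for (int i = 0; i < s.Length; i++)
                {
                    h = unchecked(31 * h + s[i]);   // one multiply and one add per character
                }
                return h;
            }

            static void Main()
            {
                // Strings that differ only in the last character hash to nearby but
                // distinct values, so "ABCDEF1".."ABCDEFn" rarely share a bucket.
                Console.WriteLine(Hash("ABCDEF1"));
                Console.WriteLine(Hash("ABCDEF2"));
            }
        }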

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However in my second class library (or even in a console test app), when i add the same service reference, it only exposes the types that are involved in the contract operations and not the client type for me to generate a proxy against. e.g. Endpoint has 2 services exposed - ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library I get ISvc1Client andf ISvc2Client to generate proxies off of in order to use the operations exposed via those 2 contracts. In addition to these clients the service reference also exposes the types involved in the operations like (type 1, type 2 etc.) this is what I need. However when i try to add a service reference to the same endpoing in another console application or class library only Type 1, Type 2 etc. are exposed and not ISvc1Client and ISvc2Client because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other or the test console app.

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach: alter session set skip_unusable_indexes = true; alter index your_index unusable; [do import] alter index your_index rebuild; However, I get the following error on the first alter index statement: SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations ORA-06512: [...] 14048. 00000 - "a partition maintenance operation may not be combined with other operations" *Cause: ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or PCTFREE which is illegal *Action: Ensure that a partition maintenance operation is the sole operation specified in ALTER TABLE or ALTER INDEX statement; operations other than those dealing with partitions, default attributes of partitioned tables/indices or specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will The problem index is defined so: CREATE INDEX A11_IX1 ON STREETS ("SHAPE") INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX" PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2'); This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >