Search Results

Search found 26869 results on 1075 pages for 'library design'.

Page 809/1075 | < Previous Page | 805 806 807 808 809 810 811 812 813 814 815 816  | Next Page >

• Can't play Steel Storm: Burning Retribution

    - by Goytor
    I've bought Steel Storm: Burning Retribution from the Software Center, and every time I run it, it shows the following message: "You have reached this menu due to missing or unlocable content/data. You may consider adding -base dir /path/to/game to your launch commandline." I've gone to the main menu in the preferences tab and changed the launcher, to no avail. When I tried running it from the console with /opt/steelstorm-episode2/steelstorm, I got:
      Game is Steel-Storm using base gamedir gamedata
      Steel-Storm Linux 01:07:07 Jun 11 2011 - release
      Playing shareware version.
      Skeletal animation uses SSE code path
      DPSOFTRAST available (SSE2 instructions detected)
      Failed to init SDL joystick subsystem:
      couldn't exec quake.rc
      couldn't exec default.cfg
      execing config.cfg
      couldn't exec autoexec.cfg
      Client using an automatically assigned port
      Client opened a socket on address 0.0.0.0:0
      Client opened a socket on address [0:0:0:0:0:0:0:0]:0
      Linked against SDL version 1.2.12
      Using SDL library version 1.2.14
      GL_VENDOR: NVIDIA Corporation
      GL_RENDERER: GeForce 6150SE nForce 430/PCI/SSE2/3DNOW!
      GL_VERSION: 2.1.2 NVIDIA 270.41.06
      vid.support.arb_multisample 1
      vid.mode.samples 0
      vid.support.gl20shaders 1
      Video Mode: fullscreen 640x480x32x0.00hz
      S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels...
      Wanted audio Specification: Channels: 2, Format: 0x8010, Frequency: 48000, Samples: 2048
      Obtained audio specification: Channels: 2, Format: 0x8010, Frequency: 48000, Samples: 1024
      Sound format: 48000Hz, 2 channels, 16 bits per sample
      CDAudio_Init: No CD in player.
      Can't get initial CD volume
      CD Audio Initialized
    If I try -base /opt/steelstorm-episode2/steelstorm, the shell says "command not found".

    Read the article

  • Getting iTunes to play third party AAC files

    - by Redmastif
    I have a library filled with some old MP3 files, and I'm in the process of changing them all to AAC for the better sound quality. Obviously I can't just create AAC versions of the files I already have, because they would sound worse (lossy compression converted to more lossy compression), so I'm going to their source, downloading them in a lossless form, and using a third-party tool to make them into AAC. Apparently iTunes will not handle AAC files that aren't made with iTunes. Is there a way around this? I've looked at third-party programs and would be willing to use them, but since they all require the iTunes/iPod/iEverything driver, I don't know whether they would still reject my files or not. Also, before you jump on my back about pirating: these files are from old CDs that I lost years ago. I paid for them. Thanks.

    Read the article

  • Major permission repair needed on Mac OS

    - by Luke1111
    I made the fatal error of copying and pasting a sudo command into my terminal without double-checking it; here it is: sudo chown -R mysql / What this does (for those that don't know) is recursively change the owner of every file from the root down to mysql, which is obviously not what I was intending. This has of course played havoc with my system. The first thing I did was the Apple permission repair, but that only works for files it knows about, though it has changed a lot of file ownerships back to root. It seems that many library files are still owned incorrectly, as a lot of things no longer work. What I propose doing as a temporary fix, until I can reinstall Mountain Lion, is to recursively set all ownerships that are currently mysql back to Luke. I'm not sure what they should be precisely, but this is still better than nothing. Is this possible using a shell script? I realise that this won't fix the problem properly and I will have to reformat, but I need the machine in a workable state just for this week.
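
    A minimal sketch of the ownership repair the asker describes, in Python since the exact script is the open question: it walks the filesystem and hands every file currently owned by mysql back to the user Luke (both names are taken from the question; run it with sudo, and test it on a small subtree before pointing it at /):

    ```python
    import os
    import pwd

    # Account names taken from the question; adjust before running (assumption).
    broken = pwd.getpwnam("mysql")   # account the bad chown gave everything to
    correct = pwd.getpwnam("Luke")   # account the files should go back to

    for root, dirs, files in os.walk("/"):
        for name in dirs + files:
            path = os.path.join(root, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # skip unreadable or vanished entries
            if st.st_uid == broken.pw_uid:
                # lchown fixes symlinks themselves rather than their targets;
                # the group is left untouched.
                os.lchown(path, correct.pw_uid, st.st_gid)
    ```

    The same effect can be had from the shell with find / -user mysql combined with chown, which may be closer to the one-liner the asker has in mind.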

    Read the article

  • Advanced PHP book [closed]

    - by Aaditi Sharma
    I've gone and stumbled across a lot of recommendations for PHP books, including on SO, but could not find a reasonable and convincing answer to this: is there a really good advanced book for PHP? Background: I've done almost 8 months in PHP. I know the basics. I go through php.net very often. I've played around with CodeIgniter, amongst other frameworks. I've been doing JavaScript for almost 2 years, and thanks specifically to Douglas Crockford I completely changed the way I code JavaScript. I spend a lot of time travelling, and would love to read a book about PHP that covers the awesome parts as well as the places where something doesn't quite work in PHP. (As a note, a lot of previous answers on SO and Programmers give varied results.) I have to place an order through a library which has its limitations, so one book that experienced PHP programmers could recommend would be helpful. I have gone through http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read and http://stackoverflow.com/questions/194812/list-of-freely-available-programming-books, which do NOT have books related to PHP.

    Read the article

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a recent migration, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003 and from 2010 to the internet, but not from 2003 to 2010. I went through the normal checklist, looking at permissions, DNS, and the routing group connectors. I verified that both servers listed in the routing group connectors were the routing master in their respective routing groups through the 2003 ESM. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either. For my previous post about this issue, in which inheritable permissions were the culprit, see: Exchange 2010, Exchange 2003 Mail Flow issue. And for routing group issues: Exchange 2007 Routing Group Connector Mayhem. I finally enabled logging on the SMTP virtual server on Exchange 2003 and on the Default Receive Connector on 2010 and sent a few test e-mails, and found that 2003 was having issues authenticating to 2010. By default, 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was: 4.7.0 Temporary Authentication Failure. After searching on this error, I found the cause: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain policy and changed this user right assignment to list only Domain Users. The fix was to clear this setting in the Default Domain policy, force gpupdate to refresh the group policy settings, then ensure the appropriate users and groups were listed. This immediately fixed the problem, and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes. The default user rights assignments for "Access this computer from the network" are:
    - On workstations and servers: Administrators, Backup Operators, Power Users, Users, Everyone
    - On domain controllers: Administrators, Authenticated Users, Everyone
    More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples: user accounts, groups, SSL certificates, access rights, databases, software license provisioning, storage, list-serve accounts. These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage administration chores such as:
    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).
    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system" (OMS). The OMS should allow:
    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object to a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in
    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
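
    The entry does not name a concrete tool, but the requirements it lists imply a fairly small core data model. A minimal sketch of that model, with every class and field name invented for illustration:

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Optional

    @dataclass
    class ManagedObject:
        """One provisioned item (account, certificate, database, ...)."""
        object_id: str
        object_type: str           # e.g. "ssl-certificate", "user-account"
        owner: str                 # identity from the external provider (LDAP/AD)
        provider: str              # plug-in responsible for the real resource
        expires_at: Optional[datetime] = None

    class ObjectRegistry:
        def __init__(self) -> None:
            self._objects: List[ManagedObject] = []

        def register(self, obj: ManagedObject) -> None:
            self._objects.append(obj)

        def owned_by(self, user: str) -> List[ManagedObject]:
            return [o for o in self._objects if o.owner == user]

        def transfer(self, from_user: str, to_user: str) -> None:
            # "Move all objects owned by user X to user Y."
            for o in self.owned_by(from_user):
                o.owner = to_user

        def expiring_within(self, days: int) -> List[ManagedObject]:
            cutoff = datetime.utcnow() + timedelta(days=days)
            return [o for o in self._objects
                    if o.expires_at is not None and o.expires_at <= cutoff]
    ```

    A real system would persist this registry and drive the notifications and provider plug-ins from it; the sketch only covers the ownership, transfer, and expiration queries from the list above.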

    Read the article

  • SEO for maps-based websites that require user interaction

    - by j0nes
    I have a website that basically shows a lot of locations worldwide on a Google Maps-like interface. The map itself is built using the Leaflet library and OpenStreetMap tiles. On the map, I show markers at each location I have. There is a popup window when I click on a marker that shows additional information and contains links to "detail" pages for this location. I fetch the location data for the current viewport via an AJAX call to my server, so the additional information is not available in the HTML page itself. The detail pages are the pages my users are interested in. My normal users load the map, search for the location they are interested in, click on a marker, and click on a link in the popup window. For search engines, however, this might look different. As this navigation pattern relies on user interaction, I think they might not be able to find the detail pages. My questions:
    - Are search engines able to follow a navigation path like the one outlined above?
    - How can I improve the navigation for search engines? (For example, showing textual links below the map, sitemaps...)
    - How important are internal links for SEO?
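
    One common answer to the second question is to expose the same location data to crawlers as plain crawlable URLs, for example by generating a sitemap from it. A minimal sketch, assuming the locations are available server-side as a list of dicts with an id field (the field names and URL pattern are illustrative):

    ```python
    from xml.etree.ElementTree import Element, SubElement, ElementTree

    def write_sitemap(locations, base_url, out_path="sitemap.xml"):
        """Emit one <url> entry per location detail page."""
        ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
        urlset = Element("urlset", xmlns=ns)
        for loc in locations:
            url = SubElement(urlset, "url")
            SubElement(url, "loc").text = f"{base_url}/locations/{loc['id']}"
        ElementTree(urlset).write(out_path, encoding="utf-8", xml_declaration=True)

    # Example: write_sitemap([{"id": 42}], "https://example.com")
    ```

    The same data source can also feed a static "all locations" HTML page linked from below the map, which gives crawlers a path to the detail pages without relying on the AJAX-driven popups.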

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    - Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com): Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that make up the complex enterprise software."
    - Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com): Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."
    - Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com): Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.
    - Oracle Cloud and Oracle Platinum Services Announcements (oracle.com): Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012, 1:00 p.m. PT – 2:30 p.m. PT.
    - Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com): Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."
    - Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl): Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."
    - Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com): "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."
    - Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com): Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."
    - Thought for the Day: "The difficulty lies, not in the new ideas, but in escaping from the old ones..." — John Maynard Keynes (June 5, 1883 - April 4, 1946). Source: Quotations Page.

    Read the article

  • Git for Application Settings

    - by devians
    I use a lot of tools at work and at home, and I'm constantly tweaking them in one location or the other. It's somewhat common practice to use Git to version your .vim, .vimrc, and other dotfiles, since you can host your config files on GitHub and get the share-ability and all the other advantages that implies. Being able to version and branch my configs sounds like a grand idea, since I'm always messing about with them. I'd like to discuss the best practice for doing this on a slightly wider scope. How would you implement it? Have your config-files repo in ~/Library/Configs or similar, and symlink the appropriate files into place? And how do you handle preference files for applications such as iTerm2? Those files are recreated every time, so you'd have to symlink 'backwards' and put a link in the repo rather than symlinking to the repo, since the application would just delete the symlink.
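
    For the first question, the symlink approach usually comes down to a tiny bootstrap script run after cloning the repo. A minimal sketch, assuming the repo lives at ~/Library/Configs as suggested above (the tracked-file list is illustrative):

    ```python
    from pathlib import Path

    REPO = Path.home() / "Library" / "Configs"    # repo location suggested in the question
    TRACKED = [".vimrc", ".gitconfig", ".zshrc"]  # illustrative file list

    for name in TRACKED:
        target = REPO / name          # the version-controlled copy
        link = Path.home() / name     # where the tool expects to find it
        if link.exists() or link.is_symlink():
            # Keep a backup of whatever is already in place.
            link.rename(link.parent / (link.name + ".bak"))
        link.symlink_to(target)
    ```

    The iTerm2-style files that get rewritten in place are the awkward case the asker notes: for those, the link has to live inside the repo and point outward, or the files have to be copied rather than linked.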

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:
    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
            [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard code (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.
    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:
    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.
    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet, as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release. Please note that if you are doing an xcopy deploy, you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .Net framework – the xcopy deploy instructions have been updated to show you how to do this. Below are the notes from the release page.
    ======
    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:
    - Analysis Services Tabular Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build
    Fixes and Updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles reports points to wrong server
    - SSIS - Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable Erroring due to custom Control Flow item Icon
    - SSIS Performance Visualization Index out of range
    - Fixing bugs in AggManager when aggregation design IDs don't match names
    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • iTunes home sharing problem

    - by Trev
    I have two iMacs running Snow Leopard, with iTunes Home Sharing set up on both of them using the same iTunes account credentials. Up until last night all worked fine: I could download an app on one iMac, then go to the other iMac and drag that app from the Home Sharing library into that iMac's applications. Then all of a sudden it stopped working. I can see and browse the Home Sharing library; however, when I go to drag an app or song from the share to the local library, it gives me an error stating that I am unable to do so, and when I click OK on this message I lose visibility of the home share. I have checked and this iMac is authorized with my iTunes account, and I have even deauthorized and reauthorized it, but still the same result. Does anyone have any suggestions?

    Read the article

  • Incident Management-Monitoring Ideas

    - by sprsr
    Hello all, what we are trying to do at our company (banking industry) is to apply some ITIL (Information Technology Infrastructure Library) principles, and I need some ideas for developing our company's incident management system. For those who have experience with incident management: what are the things that help you most? What are the things you can't live without while managing incidents? Do you have some good screenshots of such monitoring software? Since we chose to develop our own system instead of buying a big one, there are lots of things we may miss, and we are brainstorming here. I need some key points that are most crucial in incident management and monitoring. Thanks.

    Read the article

  • Make sni-qt recognize all tray abilities (or create a working indicator in Qt)

    - by hakermania
    There is this old thread of mine, How do I create a working indicator with Qt/C++?, where it was suggested that I use the QSystemTrayIcon class to make a tray icon in Ubuntu for my application; sni-qt is a program that takes care of the rest. As is known, Ubuntu has gotten rid of tray icons. Instead, it now uses indicators and only indicators. sni-qt converts the Qt tray icons into working indicators. The problem is that it doesn't do a very decent conversion: actions like single click, middle click, etc. do not work, while they do on systems that support tray icons. Is there a way to get these actions back? Can I use QSystemTrayIcon and still have these interesting (and, in my case, very helpful) actions in Ubuntu? I would also be glad to know the answer to the other thread I mentioned earlier (how to make a working indicator using the GTK libraries and prevent the crash). Link to the sni-qt bug: https://bugs.launchpad.net/sni-qt/+bug/1027652

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting on Windows to SDL to Python (pygame, pyopengl), and a bunch in between. Needless to say, I have been programming for a while. A while ago I started to program in OpenGL via C++ on my Mac, and after a while my work got fairly intricate (3D models with a skeleton structure and terrain development). After a long time of tinkering I stopped, because all that heavy work only yielded a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went searching for a stable game engine with some features to grow on. My requirements:
    - Licence: anything other than GPL (LGPL will do)
    - OS: Mac & Windows
    - Shader: GLSL or Cg (GLSL preferred due to experience)
    - Models: any model structure with rigging (bone) support & animation
    I am currently looking at http://www.ogre3d.org/ and am starting to meddle around with some examples. However, I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you guys to lead me in the right direction based on my requirements:
    - How was your experience with the engine you recommend?
    - Is it well documented?
    - Does it have well-documented examples?
    - Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Strategy for versioning on a public repo

    - by biril
    Suppose I'm developing a (JavaScript) library which is hosted on a public repo (e.g. GitHub). My aim, in terms of how version numbers are assigned and incremented, is to follow the guidelines of semantic versioning. Now, there are a number of files in my project which compose the actual lib and a number of files that support it, the latter being docs, a test suite, etc. My perspective thus far has been that version numbers should only apply to the actual lib, not the project as a whole, since the lib alone is 'the unit' that defines the public API. However, I'm not satisfied with this approach, as, for example, a fix in the test suite constitutes an 'improvement' to my project which will not be reflected in the version number (or in the docs, which contain a reference to it). On a more practical level, various tools, such as package managers, may (understandably) not play along with this strategy. For example, when trying to publish a change which is not reflected in the version number, npm publish fails with the suggestion "Bump the 'version' field, set the --force flag, or npm unpublish". Am I doing it wrong?

    Read the article

  • How do I get Safari to ignore the SSL certificate error?

    - by Tangopop
    In IE 6, 7, and 8 and Firefox 3.6.3 and 3.0.5, I have installed a local SSL certificate on the machine I am testing on, and I have gotten the browser to ignore the SSL error (which comes from one of my web test servers). Now I am trying to do the same thing with Safari 4, with no luck. Basically, I am running some automated scripts to test my website before it goes live, and I need to be able to ignore these errors as the scripts will all run autonomously. This is the error screen I am trying to avoid: http://library.bowdoin.edu/news/images/ezproxy-err/safari.jpg As I say, I have installed the certificate locally, and the IE 7 browser on the same machine works fine.

    Read the article

  • Additional new material WebLogic Community 2013

    - by JuergenKress
    - Load Balancing T3 Initial Context Retrieval for WebLogic using Oracle Traffic Director
    - Demystifying WebLogic and Fusion Middleware Management
    - WebLogic Server - Integrated & Optimized w/ Best of Breed Oracle Offerings to Turbo Charge your Applications
    - Get a Bird's-Eye View of IT Architecture: IT Strategies from Oracle - IT Strategies from Oracle, a complimentary authorized library of guidelines and reference architectures, can help you put together a strong IT architecture that takes into account individual technology components as well as big-picture IT concepts and strategies. Read More.
    - Deploying Oracle Application Development Framework Applications on Oracle Java Cloud Service and Oracle Database Cloud Service - With the new Oracle Cloud environment you no longer have to maintain an Oracle WebLogic server or a database server of your own; you can instead use instances hosted on Oracle Cloud.
    - More Oracle Application Development Framework Development with Eclipse - Oracle Enterprise Pack for Eclipse now provides even more Oracle Application Development Framework tooling with each release. Check out this new tutorial on Oracle Enterprise Pack for Eclipse 12.1.1.2.
    - Oracle WebLogic Devcast Series - Join us for the March 28 Oracle WebLogic Devcast Webcast, "What to Expect from Maven on Oracle WebLogic," featuring Pyounguk Cho, Oracle's principal product manager. Learn what developers can expect when utilizing Apache Maven with Oracle WebLogic.
    - Customer Webcasts (WebLogic Devcast Series - Register): Leveraging Third-Party Libraries to Create and Deploy Applications to Oracle Cloud; Oracle ADF: Tuning Application Module Pools and Connection Pools
    - WebLogic Partner Community - For regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Mass editing videos on Ubuntu?

    - by rick
    Hi, I'm trying to add a watermark and a credits image to all of my old videos. I downloaded them off YouTube, so they are all FLV (H.264?). Is there some software that will allow me to do simple edits in batches? I know a little bit of Python and tried looking at some of the libraries, but they all seem like overkill (and way above my head). So is there a solution besides getting some software and going through all my videos, doing it manually? They are all mostly the same length, but it would be nice to specify a relative position for my credits, e.g. show a static image for 10 seconds when the video is at 95%.
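
    Since the asker already knows a little Python, one low-overhead route is a short script that shells out to ffmpeg (assumed to be installed) rather than pulling in a full editing library. A minimal sketch that overlays a watermark on every FLV in a folder; the file names and overlay position are illustrative:

    ```python
    import subprocess
    from pathlib import Path

    WATERMARK = "watermark.png"      # illustrative file names
    SOURCE_DIR = Path("videos")
    OUTPUT_DIR = Path("watermarked")
    OUTPUT_DIR.mkdir(exist_ok=True)

    for video in SOURCE_DIR.glob("*.flv"):
        out = OUTPUT_DIR / (video.stem + ".mp4")
        # Overlay the watermark 10px from the top-left corner for the whole clip.
        subprocess.run([
            "ffmpeg", "-y",
            "-i", str(video),
            "-i", WATERMARK,
            "-filter_complex", "overlay=10:10",
            str(out),
        ], check=True)
    ```

    Appending a credits image at a relative point in the timeline is also possible with ffmpeg's filters, but it needs each clip's duration first (e.g. via ffprobe), so it is left out of the sketch.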

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: "As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above." -- Udi Dahan, Clarified CQRS. From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as often happens with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter, or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
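
    The contrast Udi Dahan is drawing is easiest to see as two command shapes. A minimal sketch, in Python purely for illustration; the original discussion is in a .NET/nServiceBus setting, and the class names here are invented, not Dahan's:

    ```python
    from dataclasses import dataclass

    # Coarse-grained, CRUD-style command: captures *what changed*, not *why*.
    @dataclass
    class UpdateCustomer:
        customer_id: int
        name: str
        address: str
        is_preferred: bool
        # ...plus dozens of other fields, most unchanged on any given edit.

    # Fine-grained, intent-capturing commands in the style the quote describes.
    @dataclass
    class MakeCustomerPreferred:
        customer_id: int

    @dataclass
    class CustomerHasMoved:
        customer_id: int
        new_address: str
    ```

    The question above is where between these two extremes the right granularity lies when a single user action legitimately touches hundreds or thousands of entities at once.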

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write, but I've found that there are a few inconsistencies that I'm introducing, and I don't know what the best way to resolve them is. My example here is specific to C#, but this would apply to any language with similar mechanisms. There are a few classes that I need for implementation purposes that I don't necessarily want to expose in the API, so I make them internal wherever needed. Generally what I would do is design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this:
    internal interface IMyItem
    {
        ItemSet AddTo(ItemSet set);
    }

    internal class _SmallItem : IMyItem
    {
        private readonly /* parameters */;
        public _SmallItem(/* small item parameters */) { /* ... */ }
        public ItemSet AddTo(ItemSet set) { /* ... */ }
    }

    internal abstract class _CompositeItem : IMyItem
    {
        private readonly /* parameters */;
        public _CompositeItem(/* composite item parameters */) { /* ... */ }
        public abstract object UsefulInformation { get; }
        protected void HelperMethod(/* parameters */) { /* ... */ }
    }

    internal class _BigItem : _CompositeItem
    {
        private readonly /* parameters */;
        public _BigItem(/* big item parameters */) { /* ... */ }
        public override object UsefulInformation { get { /* ... */ } }
        public ItemSet AddTo(ItemSet set) { /* ... */ }
    }
    In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The generated class is internal too, but I have control over the visibility of the members and decided to make them internal as well.
    internal partial struct ValueType
    {
        internal string String;
        internal ItemSet ItemSet;
        internal IMyItem MyItem;
    }

    internal class TokenValue
    {
        internal static int EQ(ItemSetScanner scanner) { /* ... */ }
        internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ }
        internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ }
        //...
    }
    To me, this feels odd because in the first set of classes I didn't necessarily have to make some members public; they could very well have been made internal. Internal members of an internal type can only be accessed internally anyway, so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., change all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a class declared public, internal. But it's less clear to me when the class is declared internal.

    Read the article

  • Media center consumes all available memory when attempting to play music off of a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off of my Twonky Media Server / Windows Media Connect share (based on an HP WHS box with an Atom), it plays choppily. When I open Resource Monitor, it shows that after I tell the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it? Edit: it happens when I attempt to browse the server's music, not just when I play music. Edit 2: the "ehshell" process is what consumes the memory; it appears to be something specific to Media Center. Moreover, the ehshell process doesn't die in this case. Edit 3: It only happens when browsing my Twonky library, and not my Windows Media Connect one.

    Read the article

  • Problem with missing JSON functions on PHP 5.2.6 / Plesk 8.4

    - by Drachenviech
    I have a vserver running openSUSE 10.3, Apache 2, and Plesk 8.4. I can't update or upgrade either: it is apparently not recommended to upgrade openSUSE 10.3 (and an update to the EOL 10.4 does not seem to make much sense), and Plesk fails to update no matter what version I try (it even fails to upgrade to 8.4.1). Still, I can live with that somehow, primarily because I don't have the time to do a fresh remote install on the vserver. What really is a problem is that, although the installed PHP is 5.2.6, it has no zip library and no JSON functions. The first is probably because PHP was not compiled with --enable-zip. The second is a big mystery, though. As I understand it, JSON support always comes with PHP unless it is compiled with the --disable-json configure option. This is however not the case here, and the json extension module is just not there. I even tried to enable it with extension=json.so, with no luck. The configure options of my PHP (as shipped with Plesk 8.4) are: '../configure' '--prefix=/usr' '--datadir=/usr/share/php5' '--mandir=/usr/share/man' '--bindir=/usr/bin' '--with-libdir=lib' '--includedir=/usr/include' '--sysconfdir=/etc/php5/apache2' '--with-config-file-path=/etc/php5/apache2' '--with-config-file-scan-dir=/etc/php5/conf.d' '--enable-libxml' '--enable-session' '--with-mm' '--with-pcre-regex=/usr' '--enable-xml' '--enable-simplexml' '--enable-spl' '--enable-filter' '--disable-debug' '--enable-inline-optimization' '--disable-rpath' '--disable-static' '--enable-shared' '--program-suffix=5' '--with-pic' '--with-gnu-ld' '--with-system-tzdata=/usr/share/zoneinfo' '--with-apxs2=/usr/sbin/apxs2' '--disable-all' '--disable-cli' As I understand it, PECL is not an option with 5.2.6. Or am I mistaken? Even if I'm not, the openSUSE repository only goes as far as PHP 5.2.4. The openSUSE install even came without zypper, which I had to install manually. So is there a way to get zip and JSON support running in PHP 5.2.6 without having to recompile the binary?

    Read the article

  • Opengl + SDL linking error

    - by me2loveit2
    I am trying to load an image as a texture with OpenGL, using C++ in Visual Studio 2010. I researched for a couple of hours online, found the SDL library, then implemented a simple example and got a linking error I cannot seem to figure out. The error log is here:
      1>Build started 10/20/2012 12:09:17 AM.
      1>InitializeBuildStatus:
      1>  Touching "Debug\texture mapping test.unsuccessfulbuild".
      1>ClCompile:
      1>  All outputs are up-to-date.
      1>  texture mapping test.cpp
      1>ManifestResourceCompile:
      1>  All outputs are up-to-date.
      1>texture mapping test.obj : error LNK2019: unresolved external symbol _IMG_Load referenced in function "void __cdecl display(void)" (?display@@YAXXZ)
      1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
      1>C:\Users\Me\Documents\Visual Studio 2010\Projects\Programming projects\texture mapping test\Debug\texture mapping test.exe : fatal error LNK1120: 2 unresolved externals
      1>Build FAILED.
      1>Time Elapsed 00:00:02.45
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
    Can someone please help me? I am at a desperate point right now. I downloaded SDL and copied all the .h files into: C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include I added the .lib (x86) files into (as a note, I tried the (x64) files too but got the exact same error): C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Lib and the .dll (x86) into: C:\Windows\System32 For implementing textures, I used the simple sample code from: http://www.sdltutorials.com/sdl-tip-sdl-surface-to-opengl-texture Please let me know if you can see me doing something wrong, or know how I can fix this! Thanks, Phil

    Read the article

  • MSExchangeMailSubmission service is not listed or running; sent mail not working

    - by InterMurph
    I am running a new Exchange 2007 SP1 system. I moved the mailboxes over from the old Exchange 2003 server, and incoming mail is working. But outgoing mail is not working at all, even inside my domain. Lots of debugging and searching led me to believe that the problem is that the "Microsoft Exchange Mail Submission Service" (AKA MSExchangeMailSubmission) is not running. In fact, it's not even listed in the Services list. This document (http://technet.microsoft.com/en-us/library/aa998342.aspx) says that this service is installed by the Mailbox server role. My server is running the Mailbox role, as well as the Hub Transport and Client Access roles. How do I get this service to show up in the list so that I can start it up? Thanks.

    Read the article
