Search Results

Search found 36521 results on 1461 pages for 'aq advanced queue oracle support streams propagation schedule dblink troubleshoo'.

  • wynapse.com down last night, SC tonight

    - by Dave Campbell
    In an industry segment where nobody is ever 'asleep', I suppose no time is a good time to take SQL Server down for upgrades, and I had forgotten that my host was going to be doing that. Last night about 9pm (Arizona), in the middle of working on a blog post, things started going wonky, and I finally realized everything was ok except for SQL Server. I turned in a ticket on it and was reminded about the maintenance schedule... guess I file those away in memory and just assume they'll happen while I'm asleep :) So, looking at the schedule, it appears that SQL Server for SilverlightCream is going to be down tonight. The minimum window is 9-12pm Arizona time... mileage and time may vary. Since all the posts are run through SC to get the Skim count, having SQL down sucks, but I'd rather get maintenance than have to react to a crash because of something that wasn't maintained. I'll try to get the next 'Cream post out early so that the bulk of folks can dig through it prior to the outage. Meanwhile, for those of you in Phoenix, tonight is our Phoenix Silverlight User Group April meeting, and Joel Neubeck is going to be giving us the run-through on Windows Phone 7! We're not as advanced as those MVP rock-stars in California like Victor Gaudioso, who streams his user group meeting, so you'll just have to show up for the goodness! And for anyone who's interested, here's some WP7 bling for your desktop... I want some of this real bad for my laptop! Get the full image in the post by Ozymandias.

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place it in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • Distributed Sort Sample Service

    - by kaleidoscope
    How does it work? Using the front-end of the service, a user can specify a size in MB for the input data set to sort. The algorithm then runs as four task types:

    · CreateAndSplit - The CreateAndSplit task generates the input data and stores it as 10 blobs in the utility storage. The URLs to these blobs are packaged as Separate work items and written to the queue.

    · Separate - The Separate task reads the blobs with the random numbers created in the CreateAndSplit task and places the random numbers into buckets. The interval of the numbers that go into one bucket is chosen so that the expected amount of numbers (assuming a uniform distribution of the numbers in the original data set) is around 100 kB. Each bucket is represented as a blob container in utility storage. Whenever there are 10 blobs in one bucket (i.e., the placement in this bucket is complete because there were 10 original splits), the Separate task will generate a new Sort task and write the task into the queue.

    · Sort - The Sort task merges all blobs in a single bucket and sorts them using a standard sort algorithm. The result is stored as a blob in utility storage.

    · Concat - The Concat task merges the results of all Sort tasks into a single blob. This blob can be downloaded as a text file using this Web page. As the resulting file is presented in text format, the size of the file is likely to be larger than the specified input file.

    Anish
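
    As a rough illustration of the Separate step's bucketing, here is a minimal sketch; the value range and all names are assumptions for the example, not taken from the original service.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of the Separate step: route each value into the
        // bucket owning its interval, assuming values uniform over [0, MAX).
        public class Separator {
            static final long MAX = 1L << 32; // assumed range of the random numbers

            static List<List<Long>> separate(long[] numbers, int bucketCount) {
                List<List<Long>> buckets = new ArrayList<>();
                for (int i = 0; i < bucketCount; i++) buckets.add(new ArrayList<>());
                long width = MAX / bucketCount; // interval covered by one bucket
                for (long n : numbers) {
                    int b = (int) Math.min(n / width, bucketCount - 1);
                    buckets.get(b).add(n); // in the service, one bucket = one blob container
                }
                return buckets;
            }
        }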

    Read the article

  • Execute an SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog store that can be enabled with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they will be executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint

        EXEC [SSISDB].[catalog].[create_execution]
            @package_name = 'my_package.dtsx',
            @execution_id = @execution_id OUTPUT,
            @folder_name = N'BI',
            @project_name = N'DWH',
            @use32bitruntime = False,
            @reference_id = Null

        Select @execution_id

        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'LOGGING_LEVEL',
            @parameter_value = @var0

        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'DUMP_ON_ERROR',
            @parameter_value = @var1

        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure will simply start the execution of the package and return as soon as the package has been started... thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure to add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

        exec [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type = 50,
            @parameter_name = N'SYNCHRONIZED',
            @parameter_value = 1

    And that's it. PS: From RC0 on, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.

    Read the article

  • NoVa Code Camp 2010.1 – Don’t Miss It!

    - by John Blumenauer
    Tomorrow, June 12th, will be NoVa Code Camp 2010.1, held at the Microsoft Technical Center in Reston, VA.  What’s in store?  Lots of great topics by some truly knowledgeable speakers from the mid-Atlantic region.  This event will have four talks on Azure alone, plus sessions on ASP.NET MVC2, SharePoint, WP7, Silverlight, MEF, WCF, and some great presentations centered around best practices and design. The schedule can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Schedule/tabid/202/Default.aspx The session descriptions and speaker list are at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sessions/tabid/197/Default.aspx We’re also fortunate this year to have several excellent sponsors.  The sponsor list can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sponsors/tabid/198/Default.aspx.  As a result, attendees will enjoy nice food throughout the day, and the end-of-day raffle will have some great surprises regarding swag! I’ll be presenting MEF with an introduction and then how it can be used to extend Silverlight applications.  If you’re new to MEF and/or Silverlight, don’t worry.  I’ll be easing into the concepts so everyone will leave with an understanding of MEF by the end of the session.  Don’t miss NoVa Code Camp 2010.1.  See YOU there!

    Read the article

  • What is the difference between Callable<T> and Java 8's Supplier<T>?

    - by Dan Pantry
    I've been switching over to Java from C# after some recommendations from folks over at CodeReview. So, when I was looking into LWJGL, one thing I remembered was that every call to Display must be executed on the same thread that the Display.create() method was invoked on. Remembering this, I whipped up a class that looks a bit like this:

        public class LwjglDisplayWindow implements DisplayWindow {
            private final static int TargetFramesPerSecond = 60;
            private final Scheduler _scheduler;

            public LwjglDisplayWindow(Scheduler displayScheduler, DisplayMode displayMode) throws LWJGLException {
                _scheduler = displayScheduler;
                Display.setDisplayMode(displayMode);
                Display.create();
            }

            public void dispose() {
                Display.destroy();
            }

            @Override
            public int getTargetFramesPerSecond() {
                return TargetFramesPerSecond;
            }

            @Override
            public Future<Boolean> isClosed() {
                return _scheduler.schedule(() -> Display.isCloseRequested());
            }
        }

    While writing this class you'll notice that I created a method called isClosed() that returns a Future<Boolean>. This dispatches a function to my Scheduler interface (which is nothing more than a wrapper around a ScheduledExecutorService). While writing the schedule method on the Scheduler I noticed that I could use either a Supplier<T> argument or a Callable<T> argument to represent the function that is passed in. ScheduledExecutorService doesn't contain an overload for Supplier<T>, but I noticed that the lambda expression () -> Display.isCloseRequested() is actually type-compatible with both Callable<Boolean> and Supplier<Boolean>. My question is: is there a difference between those two, semantically or otherwise - and if so, what is it, so I can adhere to it?
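
    The practical difference is small but real: Callable.call() is declared to throw Exception, while Supplier.get() is not, and only Callable has submit/schedule overloads on the executor interfaces. A minimal illustration (the class name is invented for the example):

        import java.util.concurrent.Callable;
        import java.util.function.Supplier;

        public class CallableVsSupplier {
            public static void main(String[] args) throws Exception {
                // The same lambda is compatible with both functional interfaces...
                Callable<Boolean> c = () -> true;
                Supplier<Boolean> s = () -> true;

                System.out.println(c.call()); // call() may throw checked exceptions
                System.out.println(s.get());  // get() declares no checked exceptions

                // ...but ScheduledExecutorService.schedule(...) only accepts the
                // Callable form, which is why the lambda above works with it.
            }
        }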

    Read the article

  • Hack a Linksys Router into an Ambient Data Monitor

    - by Jason Fitzpatrick
    If you have a data source (like a weather report, bus schedule, or other changing data set), you can pull it and display it with an ambient data monitor; this fun build combines a hacked Linksys router and a modified toy bus to display transit arrival times. John Graham-Cumming wanted to keep an eye on the current bus arrival time tables without constantly visiting the web site to check them. His workaround turns a hacked Linksys router, a display, a modified London city bus (you could hack apart a more project-specific enclosure, of course), and a simple bit of code that polls the bus schedule’s API, into a cool ambient data monitor that displays the arrival time, in minutes, of the next two buses that will pass by his stop. The whole thing could easily be adapted to another API to display anything from stock prices to weather temps. Hit up the link below for more information on the project. Ambient Bus Arrival Monitor Hacked from Linksys Router [via Make]

    Read the article

  • OpenGL ES Loading

    - by kuroutadori
    I want to know what the norm is for loading rendering code. Take a button. When the application is loaded, a texture is loaded which has the image of the button on it. When the button is tapped, a loader is added to a queue, which is loaded on the render thread. It then loads up an array buffer with vertexes and tex coords when render is called, adds to a render tree, and then renders. The render function looks like this:

        void render() {
            update();
            mBaseRenderer->render();
        }

    update() is where the queue is checked to see if anything needs loading. mBaseRenderer->render() is the render tree. What I am asking, then, is: should I even have the update() there at all and instead have everything preloaded before it renders? If I can have things loaded when needed, for instance on a tap, then how can it be done? (My current code causes a dequeueing buffer error (Unknown error: -75), which I assume has to do with OpenGL ES and the context.)
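
    For the load-on-tap case, here is a minimal sketch of a pending-load queue drained on the render thread; class and method names are invented, and it assumes the -75 error comes from touching the GL context from the wrong thread.

        import java.util.concurrent.ConcurrentLinkedQueue;

        // Hypothetical sketch: any thread may enqueue loaders, but only the
        // render thread runs them, so GL calls stay on the context's thread.
        public class RenderQueue {
            private final ConcurrentLinkedQueue<Runnable> pending = new ConcurrentLinkedQueue<>();

            // Called from any thread (e.g. a tap handler).
            public void enqueue(Runnable loader) {
                pending.add(loader);
            }

            // Called once per frame on the render thread, before drawing.
            public void update() {
                Runnable task;
                while ((task = pending.poll()) != null) {
                    task.run(); // safe: executes on the GL thread
                }
            }
        }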

    Read the article

  • How should I host our scalable worker processes?

    - by Pieter Breed
    We are designing a new architecture for an enterprise business. The principle we've followed so far is not to develop what you can (possibly buy and) deploy, i.e., don't reinvent any wheels. In this way we've decided on CQRS, RabbitMQ, Riak and a bunch of other things. We still need to write /some/ business code, though, and this will be in the form of worker processes, which will consume commands from a message queue and, after any side-effects, produce events onto another message queue. The idea behind this is that via the competing-consumers design we will have a scalable design right out of the box. One option is writing a management infrastructure that will know how to: deploy code, instantiate processes, kill processes, update configuration, etc., i.e. provide fault tolerance and scalability. Also, this is exactly what something like GAE or Heroku does for you, but in a public setting, and in our organization, public is bad. My question is: is there an out-of-the-box solution that we can use to host our consumers in? Like a private cloud or private platform-as-a-service - a private Heroku or GAE. Is there some kind of software or software product with which we can do all of these things and thereby get scalability and fault tolerance over our consumers?

    Read the article

  • Site Web Analytics not updating in SharePoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating data - when you go to your site > Site Settings > Site Web Analytics reports or Site Collection Analytics reports, you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM" - please ensure that the following things are covered:

    1. The Usage and Data Health Data Collection service is configured correctly.
    2. The Log Collection Schedule is configured correctly.
    3. The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
    4. One last important timer job is the Web Analytics Trigger Workflows Timer Job; ensure that this timer job is enabled and scheduled to run at regular intervals (for each site that you need analytics for).

    After you have ensured that the web analytics service configuration is working fine and the Usage Data Import job is importing the *.usage files from the ULS LOGS folder into the WSS_Logging database, and that all the required timer jobs are running as expected... wait for a day for the report to get updated. The report gets updated automatically at 2:00 am, and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)

    Read the article

  • print jobs are held until the VirtualBox guest OS is rebooted

    - by broiyan
    Here is the setup:

    - VirtualBox 4.1.20 (which the Help window describes as 4.1.12_Ubuntu)
    - Extension Pack 4.1.20 (for USB support)
    - Windows 7 Home Premium as a guest operating system on VirtualBox
    - Ubuntu 12.04 with dist-upgrades to September 2012 as the host operating system
    - Fuji Xerox DocuPrint P205b, which I believe is a GDI printer, connected via USB

    The problem is that often print jobs will sit in the print queue and nothing comes out of the printer. The printer status for the first item in the queue will be Printing even though nothing happens. Then, upon rebooting Windows, the print jobs get printed, seemingly simultaneously with the rebooting process, that is, as Windows reloads. One way to avoid this problem is to reboot Windows with the printer cable attached and then submit the print jobs; then they get printed in a timely manner. Perhaps VirtualBox has a problem with USB being plug-and-play and hot-pluggable. It's not convenient to have the printer plugged in when Windows boots because: one, this is a laptop, and two, I may boot Windows for a purpose other than printing and not anticipate needing to print. Are there any recommendable fixes for this problem?

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, require them to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • #altnetseattle - Kanban

    - by GeekAgilistMercenary
    The two main concepts of Kanban are to keep the queues minimal and to maintain visibility. Management/leadership needs to make sure the Kanban queue doesn't get starved. This is key and also very challenging: the queue needs to be minimal but also can't get too small during the course of work. This is to maintain maximum velocity. Phases of the Kanban need to be kept flowing too; bottlenecks need to be removed ASAP when they are brought up. Victory Wall – I dig that idea: somewhere to look to see the success of the team. The POs work in Rally or other tools for some client management, but that causes issues with the lack of "visibility" – a key fundamental ideal and part of Kanban. One of the big issues, when Kanban is used with Scrum, is fitting things into a sprint; but longer sprints are wasteful. Kanban work items are of a set size. At this point I got a bit sidetracked by the actual conversation and missed out on note taking. Overall, I would say people doing Kanban and lean-style software development are some of the happiest coders around. The clean focus, good velocity, sizing, and other approaches implied by Kanban help developers be the rock stars and succeed. This is definitely a topic I will be commenting on a lot more in the near future.

    Read the article

  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost (via PHP 5.3), but Google keeps on blocking my requests. I tried to follow the solutions given to a few similar questions, but for some reason they do not work. I followed these instructions to configure it: http://www.dnsexit.com/support/mailrelay/postfix.html Now for the config data. My main.cf file looks like this:

        relayhost = [smtp.gmail.com]:587
        smtp_fallback_relay = [relay.google.com]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =

    My sasl_passwd looks like this:

        [smtp.gmail.com]:587 [email protected]:password

    And this is what the mail.log rows look like:

        Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from=
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command))
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
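
    (Worth noting: the bounce in the log says "530 5.7.0 Must issue a STARTTLS command first", which suggests Postfix is not negotiating TLS before trying to authenticate. The usual fix for that particular rejection is to enable client-side TLS in main.cf, e.g. smtp_tls_security_level = encrypt on current Postfix releases, or smtp_use_tls = yes on older ones.)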

    Read the article

  • Android 2D terrain scrolling

    - by Nikola Ninkovic
    I want to make infinite 2D terrain based on my algorithm, and then I want to move it along the Y axis (to the left). This is how I did it:

        public class Terrain {
            Queue<Integer> _bottom;
            Paint _paint;
            Bitmap _texture;
            Point _screen;
            int _numberOfColumns = 100;
            int _columnWidth = 20;

            public Terrain(int screenWidth, int screenHeight, Bitmap texture) {
                _bottom = new LinkedList<Integer>();
                _screen = new Point(screenWidth, screenHeight);
                _numberOfColumns = screenWidth / 6;
                _columnWidth = screenWidth / _numberOfColumns;
                for (int i = 0; i <= _numberOfColumns; i++) {
                    // Generate a terrain point and put it into the _bottom queue.
                }
                _paint = new Paint();
                _paint.setStyle(Paint.Style.FILL);
                _paint.setShader(new BitmapShader(texture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
            }

            public void update() {
                _bottom.remove();
                // The algorithm calculates nextPoint...
                _bottom.add(nextPoint);
            }

            public void draw(Canvas canvas) {
                Iterator<Integer> i = _bottom.iterator();
                int counter = 0;
                Path path = new Path();
                path.moveTo(0, _screen.y);
                while (i.hasNext()) {
                    path.lineTo(counter, _screen.y - i.next());
                    counter += _columnWidth;
                }
                path.lineTo(_screen.x, _screen.y);
                path.lineTo(0, _screen.y);
                canvas.drawPath(path, _paint); // was drawPath(path2, ...), which doesn't compile
            }
        }

    The problem is that the game is too 'fast', so I tried pausing the thread with Thread.sleep(50); in the run() method of my game thread, but then it looks too torn. Well, is there any way to slow down the drawing of my terrain?
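
    Rather than Thread.sleep(50), a common fix is a fixed timestep: advance the terrain at a constant rate and draw every frame. A minimal sketch of a run() body under that scheme; the running flag, terrain field, and drawFrame() are assumed to exist on the game thread:

        long last = System.nanoTime();
        long accumulator = 0;
        final long STEP = 1000000000L / 30; // advance the terrain 30 steps per second

        while (running) {
            long now = System.nanoTime();
            accumulator += now - last;
            last = now;
            while (accumulator >= STEP) {
                terrain.update();    // exactly one column per fixed step
                accumulator -= STEP;
            }
            drawFrame();             // render as often as the device allows
        }

    Scroll speed then depends only on STEP, not on how fast the device can draw.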

    Read the article

  • Which Ubuntu linux kernel tree matches my installed kernel?

    - by Rmano
    Answering a recent question, and before that, trying to see if a patch which is fundamental for my machine had been included in a kernel release, I have found the following problem: how can I match the kernel version I have, which is

        [:~] % uname -a
        Linux samsung-romano 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    with the exact kernel source, which I suppose should be stored in http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=summary? That page lists quite a lot of tags, but none of them corresponds to 3.13.0-29, which is my running kernel right now. The mapping should be in https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable, where it is said that the 3.13 Ubuntu kernel is based on 3.13.11 --- I think. But going from there to finding the tree I have installed is not straightforward. Notice: I know I can install the kernel sources corresponding to my installed kernel. But I do not want to install them; I would like to have a pointer to the git tree to be able to browse it online (and check for commits, patches, etc.). The best options seem to be linux3.13-y.review or linux3.13-y.queue, but I am unable to find where these trees are marked for the release --- if I understand the policy correctly, in -review patches are accumulated for testing, and in -queue they are accumulated for the next minor release/update --- but I am unable to find the exact release tree, i.e. where a tag equivalent to 3.13.0-29 was cut.
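
    (One hint, offered with the caveat that tag conventions may have changed: the Ubuntu kernel git trees tag each released package after the package version rather than the uname string, in the form Ubuntu-<version>-<ABI>.<upload>. A kernel reporting 3.13.0-29-generic #53-Ubuntu should therefore correspond to the tag Ubuntu-3.13.0-29.53 in ubuntu/linux.git.)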

    Read the article

  • Multi-threading in an MMORPG server

    - by jean
    For an MMORPG, there is a tick function to update every object's state in a map. The function is triggered by a timer at a fixed interval, so each map's update can be dispatched to a different thread. On the other side, the server also has its own threads to handle incoming player packets: I/O threads. Generally, the handler for an incoming packet runs on an I/O thread. So there is a problem: thread synchronization. I have considered two methods:

    1. Synchronize with a mutex. An I/O thread locks a mutex before executing a handler function, and the map thread locks the same mutex before executing the map's update.
    2. Execute all handler functions on the map's thread. The I/O thread only queues the incoming handler and lets the map thread pop the queue and then call the handler function.

    These two share a disadvantage: delay. For method 1, if the map's tick function is running, then all clients' requests have to wait for the lock to be released. For method 2, if the map's tick function is running, all clients' requests have to wait for the next tick to be handled. Of course, there is another method: add a lock to every function that touches data accessed by both the I/O threads and the map thread. But this is hard to maintain and easily goes wrong; it requires carefully checking every variable for whether it is accessed by both kinds of thread. My problem is: is there a better way to do this? Note that I said a map is a logical concept, meaning no interactions can happen between two maps except transport. An I/O thread means a thread in the 3rd-party network lib which is used to handle client requests.
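
    For reference, a minimal sketch of method 2 (all names invented): the I/O threads enqueue handlers on a lock-free queue, and the map thread drains that queue at the start of every tick, so a request waits at most one tick interval and no mutex spans the whole map update.

        import java.util.concurrent.ConcurrentLinkedQueue;

        public class MapLoop {
            private final ConcurrentLinkedQueue<Runnable> inbox = new ConcurrentLinkedQueue<>();

            // Called from I/O threads when a packet for this map arrives.
            public void post(Runnable handler) {
                inbox.add(handler);
            }

            // Called by the timer on this map's thread at a fixed interval.
            public void tick() {
                Runnable h;
                while ((h = inbox.poll()) != null) {
                    h.run();        // run queued packet handlers first...
                }
                updateObjects();    // ...then update every object's state
            }

            private void updateObjects() {
                // per-object state update for this map
            }
        }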

    Read the article

  • Feedback on meeting of the MSCC - 20.07.2013

    Impression of our meetup on 20.07.2013: low quantity but high quality!

    Meetup summary: a quick introduction to the MSCC and interesting topics in general, especially for freshmen from/at university. It also seems that the open concept of OUYA (the Android-based gaming console) got some attention and, hopefully, new fans. More info is available online: http://www.ouya.tv/

    Design contest: the design contest is still going on... There's currently only one submission. Come on, you web and graphic designers in Mauritius - SHOW YOUR WORK! Any draft will be published over here: MSCC Design Contest - https://www.facebook.com/media/set/?set=a.200036533488751.1073741829.181737551985316&type=3

    Goodies give-away: the first 2 one-month subscriptions to Pluralsight have been well received by attendees, too. Unfortunately, we didn't have any free WiFi at Talking Drums, so we might have to consider another location for the next meetup.

    Change of schedule: as we spoke about the advantages of gathering during the weekend, we worked out a schedule that could be applied to future meetups of the MSCC. I'm going to raise this tomorrow during our regular Wednesday meeting to gauge the response of other members.

    Read the article

  • “Why do transactional messages all have the same priority?”

    - by John Breakwell
    I answered this question on the MSMQ forum on MSDN and thought it worth sharing here. The poster wanted to know why all transactional messages have a fixed priority of zero (instead of 0 through 7). They wanted the guaranteed delivery of messages to a queue but wanted to assign different levels of priority. Some aspects of MSMQ were defined way back in the last century, and this is one of them. If I remember right, the reason was to avoid the following scenario:

    1. You have a single transaction of 3 messages (a, b and c) with priorities 5, 3 and 4 respectively.
    2. The messages are sent in the order a, b, c.
    3. The messages arrive in the queue and are arranged in the order a, c, b to reflect priority order.
    4. This breaks the guaranteed-order part of the transaction.

    I know that very few people send more than one message in a transaction, but that is a scenario that MSMQ has to be able to handle; for the majority, including the poster, this scenario is irrelevant, which is why the absence of transactional priorities is surprising. The options, therefore, available to the Microsoft developers were to:

    1. Implement code that allowed you to send messages with variable priority, as long as any messages within the same transaction had the same priority, or
    2. Define a set priority for all transactional messages.

    As you can understand, option 1 would be a complicated arrangement, with all the necessary enforcement, error handling, user education, documentation, etc. Sure, it would have been possible to implement option 1, but I expect the product group decided to invest the development time in some other aspect of MSMQ. Now, with five versions out there, it would be confusing to change how the product operates, in addition to potentially breaking existing systems that have been working fine for years. So, balancing cost and risk against customer demand, I would not expect this feature to ever change.

    Read the article

  • use of EntityManagerFactory causing duplicate primary key exceptions

    - by bradd
    Hey guys, my goal is to create an EntityManager using properties that depend on which database is in use. I've seen something like this done in all my Google searches (I made the code more basic for the purpose of this question):

        @PersistenceUnit
        private EntityManagerFactory emf;
        private EntityManager em;
        private Properties props;

        @PostConstruct
        public void createEntityManager() {
            // if Oracle, set Oracle properties; else set Postgres properties
            emf = Persistence.createEntityManagerFactory("app-x");
            em = emf.createEntityManager(props);
        }

    This works: I can load Oracle or Postgres properties successfully, and I can SELECT from either database. HOWEVER, I am running into issues when doing INSERT statements. Whenever an INSERT is done I get a duplicate primary key exception... every time! Can anyone shed some light on why this may be happening? Thanks -Brad
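
    One detail that stands out: the @PersistenceUnit-injected factory is immediately overwritten by a second, hand-built one, so the container and the application can end up with independent factories (and, depending on the ID generation strategy, independent key allocators). A sketch of an alternative, assuming a JPA 2.0 provider and using illustrative property keys and URLs: pass the database-specific overrides to the Map overload of createEntityManagerFactory and create every EntityManager from that single factory.

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class EmProvider {
            private final EntityManagerFactory emf;

            public EmProvider(boolean oracle) {
                Map<String, Object> props = new HashMap<String, Object>();
                // Standard JPA 2.0 key; the URLs here are placeholders.
                props.put("javax.persistence.jdbc.url",
                          oracle ? "jdbc:oracle:thin:@//host:1521/db"
                                 : "jdbc:postgresql://host:5432/db");
                emf = Persistence.createEntityManagerFactory("app-x", props);
            }

            public EntityManager newEntityManager() {
                return emf.createEntityManager(); // one factory, many managers
            }
        }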

    Read the article

  • Deciding between Apache Commons exec or ProcessBuilder

    - by Moev4
    I am trying to decide whether to use ProcessBuilder or Commons Exec. My requirements are simply to create a daemon process whose stdout/stdin/stderr I do not care about, and to execute a kill to destroy this process when the time comes. I am using Java on Linux. I know that both have their pains and pitfalls (such as being sure to use a separate thread to swallow streams, since not doing so can lead to blocking or deadlocks, and closing the streams so as not to leave open files hanging around) and wanted to know if anyone had suggestions one way or the other, as well as any good resources to follow.
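
    For those exact requirements, plain ProcessBuilder can stay quite small. A sketch (the daemon path is made up) that redirects output to /dev/null, so no stream-swallowing thread is needed at all:

        import java.io.File;
        import java.io.IOException;

        public class DaemonRunner {
            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("/usr/local/bin/mydaemon");
                pb.redirectErrorStream(true);             // merge stderr into stdout
                pb.redirectOutput(new File("/dev/null")); // discard output: nothing to drain
                Process p = pb.start();
                p.getOutputStream().close();              // the daemon gets no stdin

                Thread.sleep(5000);                       // ... real work happens here ...

                p.destroy();                              // SIGTERM on Linux
                p.waitFor();                              // reap the process
            }
        }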

    Read the article

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data (about 60,0000 records) from an Oracle database to a SharePoint list using SSIS, but we are getting the following error when the record count reaches around 19000: "The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020" and "System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request." Earlier we thought it could be due to a SharePoint list limit, so we tried reducing two of the columns, and then it went through fine. That leaves us with one column, of datatype DT_STR and length 400 in Oracle, because of which the issue might be happening; it is mapped to a SharePoint custom list field of multiline type. We also verified whether the length of the field is the issue, but in the Oracle DB the max length of this column over all records is only 239, so the length issue is also ruled out. Anyone who has faced this kind of issue or knows its cause, kindly let us know. Thanks and regards, Vicky

    Read the article

  • wowza vs Flash Media Server (FMS / FMIS) - ease of integration with ASP.Net

    - by alchemical
    We're creating a web site offering one-to-many video chat and trying to decide which of these streaming servers to go with. We're looking at around 256 kbps live streams, hoping to achieve at least 1000 simultaneous streams on one 8-core server. Wowza is cheaper (1k vs 5k for FMS) and appears to be used successfully by many sites (StreamLive, Justin.TV, etc.). However, some people have expressed that it may be more difficult to work with, i.e., fine-tuning it, less documentation, integration with ASP.NET code, etc. Wondering if anyone with real-world experience with either of these servers can advise on how easy or difficult they are to use and integrate for a site like this. Also wondering if there is any performance difference (lag, etc.).

    Read the article

  • Retrieving long text (CLOB) using CFQuery

    - by CFUser
    I am using CFQuery to retrieve a CLOB field from an Oracle DB. If the CLOB field contains less than roughly 8000 characters, then I can see the retrieved value (the output); however, if the value in the CLOB field is longer than 8000 chars, then it's not retrieving the value. In <cfdump> I can see the query retrieved an 'empty string', though the value exists in the Oracle DB. I am using the Oracle driver; in the CF admin console I enabled 'Enable long text retrieval (CLOB)' and 'Enable binary large object retrieval (BLOB)', and set the 'Long Text Buffer (chr)' and 'Blob Buffer (bytes)' values to 6400000. Any suggestions on how to retrieve the full text?

    Read the article
