Search Results

Search found 25148 results on 1006 pages for 'distributed source contr'.


  • DRBD and MySQL - Virtualbox Setup

    DRBD (Distributed Replicated Block Device) is a Linux project that provides real-time block-level replication between servers. Sean Hull demonstrates how to use Sun's VirtualBox software to create a pair of VMs, then configure those VMs with DRBD, and finally install and test MySQL running on volumes sitting on DRBD.

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago Devlabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx

    The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exploited behind a central rules service.

    This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not, currently, offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm).

    Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, mobile devices, etc.

    We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus and use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus.

    So, well done, Tellago. I haven't had a chance to play with the API yet, but am looking forward to doing so.
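
    To make the invocation model concrete, here is a minimal sketch of what calling such a RESTful rules service could look like from a dynamic language. The endpoint URL, rule-set name, and payload below are hypothetical illustrations of the pattern Jesus describes (execute a rule set against XML in the request payload), not Tellago's actual resource layout:

        import requests  # third-party HTTP client: pip install requests

        # Hypothetical endpoint; the real resource layout is defined by Tellago's API.
        url = "http://rules.example.com/api/rulesets/OrderDiscountPolicy/execute"

        # Illustrative XML fact document to evaluate against the rule set.
        payload = """<Order xmlns="http://example.com/orders">
          <Total>250.00</Total>
          <CustomerTier>Gold</CustomerTier>
        </Order>"""

        response = requests.post(
            url,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/xml"},
            timeout=30,
        )
        response.raise_for_status()
        print(response.text)  # the service would return the evaluated/enriched XML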

    Read the article

  • Article Sharing – Windows Azure Memcached Plugin

    - by Shaun
    I just found that David Aiken, a Windows Azure developer and evangelist, wrote a cool article about how to use Memcached in Windows Azure through the new Azure Plugin feature: http://www.davidaiken.com/2011/01/11/windows-azure-memcached-plugin/ I think the best solution for a distributed cache in Azure would be Windows Azure AppFabric Caching, but since it's only in CTP and available only in the US data center, David's solution would be the best. The only thing I'm concerned about is the stability of the Windows version of Memcached.

    Read the article

  • Python: Decent config file format

    - by miracle2k
    I'd like to use a configuration file format which supports key/value pairs and nestable, repeatable structures, and which is as light on syntax as possible. I'm imagining something along the lines of:

        cachedir = /var/cache
        mail_to = [email protected]

        job {
            name = my-media
            frequency = 1 day
            source {
                from = /home/michael/Images
            }
            source { }
            source { }
        }

        job { }

    I'd be happy with something using significant whitespace as well. JSON requires too many explicit syntax rules (quoting, commas, etc.). YAML is actually pretty good, but would require the jobs to be defined as a YAML list, which I find slightly awkward to use.
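
    For comparison, here is a minimal sketch (in Python, using the third-party PyYAML package) of how the same structure could look in YAML, with the jobs expressed as a list - which is exactly the part the question finds awkward. The email address is a placeholder:

        import textwrap
        import yaml  # pip install PyYAML

        config_text = textwrap.dedent("""\
            cachedir: /var/cache
            mail_to: someone@example.com
            jobs:
              - name: my-media
                frequency: 1 day
                sources:
                  - from: /home/michael/Images
        """)

        config = yaml.safe_load(config_text)
        print(config["jobs"][0]["name"])        # -> my-media
        print(config["jobs"][0]["sources"][0])  # -> {'from': '/home/michael/Images'}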

    Read the article

  • Database of common translated phrases for localization?

    - by richardtallent
    I'd like to localize my app into a few other languages, all of which I can barely order a drink in. Does anyone know of an online resource for translations of common software menu options, messages, etc. into other languages? Given the number of developers (both OSS and closed-source) that deal with localization, and the overlap of resource strings between them, it seems like a pretty obvious fit for a wiki or open source package, but I can't seem to find anything like this. I could try to mine the Windows resource files or dig around in the resource strings of robust OSS apps like Firefox, but I suspect I'm not the first person to think of this, and surely there is a site that I'm just not finding yet. Update: since nothing like this exists that I could find, I started a feeble attempt at an open-source string library. It's just a boilerplate Google spreadsheet for now, so if you want to contribute, please go here: http://www.xark.org/
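
    To make the idea concrete, here is a minimal Python sketch of the kind of locale-keyed phrase table such a shared library boils down to. The keys and translations are illustrative examples, not entries from any real repository:

        # A tiny locale-keyed phrase table; the entries are illustrative only.
        COMMON_PHRASES = {
            "menu.file":     {"en": "File",   "de": "Datei",      "fr": "Fichier"},
            "menu.edit":     {"en": "Edit",   "de": "Bearbeiten", "fr": "Édition"},
            "button.cancel": {"en": "Cancel", "de": "Abbrechen",  "fr": "Annuler"},
        }

        def translate(key, locale, fallback="en"):
            """Look up a common UI phrase, falling back to English if missing."""
            entry = COMMON_PHRASES.get(key, {})
            return entry.get(locale, entry.get(fallback, key))

        print(translate("menu.file", "de"))      # -> Datei
        print(translate("button.cancel", "xx"))  # -> Cancel (falls back to English)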

    Read the article

  • Oracle Fusion Applications Data Sheets Are Now Available

    - by David Hope-Ross
    For customers chomping at the bit for more information on Oracle Fusion Applications, there is good news. We've just published a complete library of data sheets for Oracle Fusion Applications. Included are SCM applications like Fusion Distributed Order Orchestration, Fusion Inventory Management, and Fusion Product Hub. And customers interested in sourcing and procurement should review documents that address Oracle Fusion Sourcing, Oracle Fusion Procurement Contracts, Oracle Fusion Purchasing, Oracle Fusion Self Service Procurement, and Oracle Fusion Supplier Portal.

    Read the article

  • SourceSafe sharing and branching

    - by Melody Friedenthal
    So far I've only checked things out of and back into SourceSafe, but now I want to create a project for parallel development. That is, I want to Share and Branch the entire project. I have been reading the SourceSafe Help files on how to do this, and although I think I am following the instructions, I end up with an empty folder. Can someone enumerate the steps required to do this? Do you start by creating a new project under the SourceSafe root so you have something to Share the original project with? Note: we have SourceSafe 6.0.

    Read the article

  • What's the cause of (and treatment for) this Java MySQL exception?

    - by justkevin
    I'm getting the following exception when executing the first PreparedStatement after a period of inactivity:

        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

        The last packet successfully received from the server was 2,855,054 milliseconds ago.
        The last packet sent successfully to the server was 123 milliseconds ago.
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
            at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3052)
            at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2938)
            at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3481)
            at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959)
            at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109)
            at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2648)
            at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2077)
            at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2228)

    This only shows up if my application hasn't communicated with MySQL recently; subsequent queries execute normally. I suspect it's some kind of timeout issue, but the periods of inactivity (2,855,054 ms is only about 48 minutes) are well below the default 8-hour MySQL timeout. Any suggestions?
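
    The question is about a Java application, but the first diagnostic step is language-neutral: check what wait_timeout is actually in effect on the server, since a my.cnf override or hosting default may be far below 8 hours. A minimal sketch using the mysql-connector-python package (connection parameters are placeholders):

        import mysql.connector  # pip install mysql-connector-python

        conn = mysql.connector.connect(
            host="localhost", user="app", password="secret", database="appdb"
        )
        cur = conn.cursor()

        # The session value governs each connection; compare it against the
        # observed idle gap before the failure (about 48 minutes here).
        cur.execute("SHOW SESSION VARIABLES LIKE 'wait_timeout'")
        print(cur.fetchone())  # e.g. ('wait_timeout', '28800') -> seconds
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'wait_timeout'")
        print(cur.fetchone())

        cur.close()
        conn.close()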

    Read the article

  • Nagios: Central Monitoring

    Begin Linux: "This is part two of a three-part series on distributed monitoring. You can use passive service and host checks to allow non-central Nagios servers to collect data from a network of machines and then transfer that information to a central Nagios server."

    Read the article

  • DLL export of a static function

    - by Begbie00
    Hi all - I have the following static function:

        static inline HandVal
        StdDeck_StdRules_EVAL_N( StdDeck_CardMask cards, int n_cards )

    Can I export this function in a DLL? If so, how? Thanks, Mike

    Background information: I'm doing this because the original source code came with a VS project designed to compile as a static (.lib) library. In order to use ctypes/Python, I'm converting the project to a DLL. I started a VS project as a DLL and imported the original source code. The project builds into a DLL, but none of the functions (including functions such as the one listed above) are exported (as confirmed by both the absence of dllexport in the source code and tools such as DLL Export Viewer). I tried to follow the general advice here (create an exportable wrapper function within the header) to no avail; the functions still don't appear to be exported.
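
    Since the end goal is calling the code from ctypes, here is a minimal sketch of the Python consumption side once a non-static, dllexport'ed wrapper exists. The DLL name, wrapper name, and argument types are assumptions made for illustration, not the library's actual API:

        import ctypes

        # Hypothetical: assumes the project builds pokereval.dll and exposes a
        # cdecl wrapper, Wrap_EVAL_N, around the static StdDeck_StdRules_EVAL_N.
        lib = ctypes.CDLL("./pokereval.dll")

        lib.Wrap_EVAL_N.argtypes = [ctypes.c_uint64, ctypes.c_int]  # assumed mask width
        lib.Wrap_EVAL_N.restype = ctypes.c_uint32                   # assumed HandVal size

        handval = lib.Wrap_EVAL_N(0x1F, 5)  # illustrative card mask and card count
        print(handval)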

    Read the article

  • Does Team Foundation Server support Checkpoints?

    - by marco.ragogna
    My dev team used MKS Source Integrity source control in the past, and we are now evaluating a migration to TFS 2010. Some concepts and meanings are a bit different, and we need some time to learn how to do in TFS the same things we did before, or how to change our approach. First of all, we used to create Checkpoints for each software release. MKS in this case takes a snapshot of all source code files. You can later compare different checkpoints to see the code differences, or extract a whole checkpoint as a build. Does TFS have a similar feature? Do you know where I can read something about it? Thanks in advance, Marco

    Read the article

  • [VB.Net] System.IO will copy files, but fails to update the destination's file attributes

    - by CFP
    Hello, I have a little VB.NET script that will copy a file, set its attributes to Normal, update the file time, and then set the attributes back to match those of the source file:

        If IO.File.Exists(Destination) Then IO.File.SetAttributes(Destination, IO.FileAttributes.Normal)
        IO.File.Copy(Source, Destination, True)
        IO.File.SetAttributes(Destination, IO.FileAttributes.Normal)
        IO.File.SetLastWriteTimeUtc(Destination, IO.File.GetLastWriteTimeUtc(Destination).AddHours(1))
        IO.File.SetAttributes(Destination, IO.File.GetAttributes(Source))

    However, I'm encountering a quite strange problem. On some configurations, IO.File.SetLastWriteTimeUtc triggers an UnauthorizedAccess error, although the IO.File.Copy instruction worked very well. I'm totally puzzled: I've checked, and the file attributes are set to 128 (i.e. Normal) successfully. The problem seems to lie with SetLastWriteTimeUtc itself. But what is it? Any ideas? Thanks a lot!
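
    For readers outside .NET, the same copy / normalize attributes / bump timestamp / restore attributes sequence can be sketched language-neutrally; here is a Python version of the pattern (the helper name and the one-hour bump mirror the snippet above and are not part of any standard API):

        import os
        import shutil
        import stat

        def copy_with_bumped_mtime(source, destination, bump_seconds=3600):
            # Clear read-only on an existing destination so the overwrite succeeds.
            if os.path.exists(destination):
                os.chmod(destination, stat.S_IWRITE)
            shutil.copy2(source, destination)     # copies contents and metadata
            os.chmod(destination, stat.S_IWRITE)  # stay writable while touching times
            st = os.stat(destination)
            os.utime(destination, (st.st_atime, st.st_mtime + bump_seconds))
            # Mirror the source's read-only state back onto the destination.
            if not (os.stat(source).st_mode & stat.S_IWRITE):
                os.chmod(destination, stat.S_IREAD)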

    Read the article

  • Linux Kernel - programmatically retrieve block numbers as they are written to

    - by SpdStr
    I want to maintain a list of block numbers as they are physically written to, using the Linux kernel source. I plan to modify the kernel source to do this. I just need to find the structures and functions in the kernel source that handle writing to physical partitions, so I can get the block numbers as they are written to the physical partition. Is there any way of doing this? Any help is appreciated. If I can find where the kernel actually writes to the partitions and returns the block numbers, that'd work.

    Read the article

  • How to apply Ubuntu patch for rpcbind?

    - by Linda
    I am currently running Ubuntu 12.04.1 Desktop and would like to apply this patch: https://launchpad.net/ubuntu/+source/rpcbind/0.2.0-7ubuntu1.2

    My current rpcbind version is here:

        # aptitude show rpcbind
        Package: rpcbind
        State: installed
        Automatically installed: yes
        Version: 0.2.0-7ubuntu1.1

    As you can see on the patch page, I'd like to patch to this version:

        Version: 0.2.0-7ubuntu1.2

    However, based on the downloadable files on the patch page, I'm not sure where to start.

    Directory structure of the original rpcbind source:

        # find rpcbind-0.2.0 -type d
        rpcbind-0.2.0
        rpcbind-0.2.0/src
        rpcbind-0.2.0/man

    Directory structure of the patch download:

        # find debian -type d
        debian
        debian/patches
        debian/source

    [EDIT] I've figured out how to apply the individual patches in the patches directory:

        # patch -p1 < ../debian/patches/01-usage-fix.patch
        patching file src/rpcbind.c

    (and so on for each patch file) ... but I'm not sure what to do with the patch-related files in the root debian folder. Any help here? Thanks in advance, Linda
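
    If the goal is simply to apply everything under debian/patches in the order Debian uses, a small script can drive patch from the series file that 3.0 (quilt) source packages ship. A sketch, assuming it runs from inside rpcbind-0.2.0 and that debian/patches/series exists alongside the patches:

        import subprocess
        from pathlib import Path

        patches = Path("../debian/patches")

        # The series file lists the patches in application order.
        for line in (patches / "series").read_text().splitlines():
            name = line.strip()
            if not name or name.startswith("#"):
                continue  # skip blank lines and comments
            print("applying", name)
            with open(patches / name, "rb") as fh:
                subprocess.run(["patch", "-p1"], stdin=fh, check=True)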

    Read the article

  • shared library in C

    - by sasayins
    Hi, I am creating a shared library in C but don't know what the correct implementation of the source code is. I want to create an API, for example printHello:

        void printHello( char * text );

    This printHello function will call another function. In the source file, libprinthello.c:

        void printHello( char * text )
        {
            printHi();
            printf("%s", text);
        }

    Since this printHello function is the interface for the user or application, in the header file libprinthello.h:

        extern void printHello( char * text );

    Then in the source file of the printHi function, printhi.c:

        void printHi()
        {
            printf("Hi\n");
        }

    Then my problem is: since printHello is the only function that I want to expose to the user, what implementation should I use for the printHi function?

    Read the article

  • WPF - 'Relational' Data in XAML Using DataContext

    - by Andy T
    Hi, say I have a list of Employee IDs from one data source and a separate data source with a list of Employees, with their ID, Surname, FirstName, etc. Is it possible in XAML only to get the Employee's name from the second data source and display it next to the ID, using something like this (with the syntax corrected)?

        <TextBlock x:Name="EmployeeID" Text="{Binding ID}"></TextBlock>
        <TextBlock Grid.Column="1"
                   DataContext="{StaticResource EmployeeList[**where ID = {Binding ID}**]}"
                   Text="{Binding Surname}"/>

    I'm thinking back to my days using XML and XSLT with XPath to achieve the kind of thing shown above. Is this kind of thing possible in XAML? Or do I need to 'denormalize' the data first in code, into one consolidated list? It seems like it should be possible to do this simple task using XAML only, but I can't quite get my head around how you would switch the DataContext correctly and what the syntax would be to achieve this. Is it possible, or am I barking up the wrong tree? Thanks, AT

    Read the article

  • Why is WPO (whole-program optimization) not improving my program size? (FPC 2.4.0)

    - by Gregory Smith
    I use FPC 2.4.0 for WinXP (binary from the official page); I also tried the same version compiled from source on my computer. I run something like this:

        I:\pascal\fpc-2.4.0.source\fpc-2.4.0\compiler\ppc386 -FWserver-1.wpo -OWsymbolliveness -CX -XX -Xs- -al -Os -oServer1.o Server
        I:\pascal\fpc-2.4.0.source\fpc-2.4.0\compiler\ppc386 -FWserver-2.wpo -OWsymbolliveness -Fwserver-1.wpo -Owsymbolliveness -CX -XX -Xs- -al -Os -oServer2.o Server
        ... (up to 100 times)

    but I always get the same .wpo files and the same .o sizes (the .s assembly files change intermittently). I also noted (through compiler messages) that unused variables are still alive. I also tried -OWall -owall. What am I doing wrong?

    Read the article

  • How to use infinite live streams with the JAVE library? (Java, ffmpeg)

    - by Ole Jak
    So I want to use JAVE to save an mp3 radio stream to my file system. I have this code for file saving, but what shall I do to save a stream (stopping on a timer, for example)?

        File source = new File("source.wav");
        File target = new File("target.mp3");
        AudioAttributes audio = new AudioAttributes();
        audio.setCodec("libmp3lame");
        audio.setBitRate(new Integer(128000));
        audio.setChannels(new Integer(2));
        audio.setSamplingRate(new Integer(44100));
        EncodingAttributes attrs = new EncodingAttributes();
        attrs.setFormat("mp3");
        attrs.setAudioAttributes(audio);
        Encoder encoder = new Encoder();
        encoder.encode(source, target, attrs);
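
    JAVE aside, the timer-bounded capture itself is straightforward; here is a sketch of that idea in Python using the third-party requests library rather than JAVE (the stream URL is a placeholder). The captured file could then be transcoded with code like the snippet above:

        import time
        import requests  # pip install requests

        STREAM_URL = "http://radio.example.com/stream.mp3"  # placeholder URL
        DURATION_SECONDS = 60

        deadline = time.monotonic() + DURATION_SECONDS
        with requests.get(STREAM_URL, stream=True, timeout=10) as resp:
            resp.raise_for_status()
            with open("capture.mp3", "wb") as out:
                for chunk in resp.iter_content(chunk_size=8192):
                    out.write(chunk)
                    if time.monotonic() >= deadline:
                        break  # stop on the timer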

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG).

    Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph , or send your questions to [email protected].

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • python object AttributeError: type object 'Track' has no attribute 'title'

    - by ccwhite1
    I apologize if this is a noob question, but I can't seem to figure this one out. I have defined an object that describes a music track (NOTE: this originally had just ATTRIBUTE vs. self.ATTRIBUTE; I edited those values in to help remove confusion - they had no effect on the problem):

        class Track(object):
            def __init__(self, title, artist, album, source, dest):
                """
                Model of the Track object.
                Contains the following attributes:
                'Title', 'Artist', 'Album', 'Source', 'Dest'
                """
                self.atrTitle = title
                self.atrArtist = artist
                self.atrAlbum = album
                self.atrSource = source
                self.atrDest = dest

    I use ObjectListView to create a list of tracks in a specific directory:

        ....other code....
        self.aTrack = [Track(sTitle, sArtist, sAlbum, sSource, sDestDir)]
        self.TrackOlv.AddObjects(self.aTrack)
        ....other code....

    Now I want to iterate the list and print out a single value of each item:

        list = self.TrackOlv.GetObjects()
        for item in list:
            print item.atrTitle

    This fails with the error AttributeError: type object 'Track' has no attribute 'atrTitle'. What really confuses me is that if I highlight a single item in the ObjectListView display and use the following code, it will correctly print out the single value for the highlighted item:

        list = self.TrackOlv.GetSelectedObject()
        print list.atrTitle
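
    One detail worth noting: "type object 'Track'" in that error means the attribute lookup happened on the Track class itself rather than on an instance - which is what happens if the class object, and not a Track(...) instance, ends up in the list. A minimal standalone sketch of the difference, independent of ObjectListView:

        class Track(object):
            def __init__(self, title):
                self.atrTitle = title

        items = [Track("Song A"), Track]  # second entry is the class, not an instance

        print(items[0].atrTitle)  # works: instance attribute set in __init__
        try:
            print(items[1].atrTitle)  # fails: the class itself has no atrTitle
        except AttributeError as e:
            print(e)  # type object 'Track' has no attribute 'atrTitle'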

    Read the article

  • Eclipse launch error when trying to run an Android app.

    - by user259642
    I'm trying to set up my workstation for Android development with Eclipse Galileo. I installed the latest ADT plugin and the Android SDK, but I get this error when I try to run any basic Android project I create:

        eclipse.buildId=M20090917-0800
        java.version=1.6.0_17
        java.vendor=Sun Microsystems Inc.
        BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_US
        Framework arguments: -product org.eclipse.epp.package.java.product -product org.eclipse.epp.package.java.product
        Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product -data C:\Documents and Settings\dmcnamar\workspace -product org.eclipse.epp.package.java.product

        Error Tue Jan 26 18:00:41 EST 2010
        An internal error occurred during: "Launching HelloWorld".
        java.lang.NullPointerException
            at com.android.ide.eclipse.adt.internal.launch.AndroidLaunchController.launch(Unknown Source)
            at com.android.ide.eclipse.adt.internal.launch.LaunchConfigDelegate.doLaunch(Unknown Source)
            at com.android.ide.eclipse.adt.internal.launch.LaunchConfigDelegate.launch(Unknown Source)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
            at org.eclipse.debug.internal.ui.DebugUIPlugin.buildAndLaunch(DebugUIPlugin.java:866)
            at org.eclipse.debug.internal.ui.DebugUIPlugin$8.run(DebugUIPlugin.java:1069)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced.

    Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:

        Set the WinRM service type to auto start.
        Start the WinRM service.

        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem, which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce

    Likewise, the Reduce function follows a simple prototype which takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");

            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }

            Write-Output $list;
        }

        # Build the Reduce function
        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce
            $redux.Add($s.MyKey, $sum / $count);

            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • How to exclude copy local referenced assemblies from a VSIX

    - by Daniel Cazzulino
    When you add library references to a project that are not reference assemblies or installed in the GAC, Visual Studio defaults to setting Copy Local to True. If, however, those dependencies are distributed by some other means (i.e. another extension, or as part of VS private assemblies, or whatever) and you want to avoid including them in your VSIX, you can add the following property to the project file:

        <PropertyGroup>
          ...
          <IncludeCopyLocalReferencesInVSIXContainer>false</IncludeCopyLocalReferencesInVSIXContainer>
        </PropertyGroup>

    Read full article

    Read the article
