Search Results

Search found 16386 results on 656 pages for 'relative path'.


  • Weird y offset when using custom frag shader (Cocos2d-x)

    - by Mister Guacamole
    I'm trying to mask a sprite, so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that my texture seems to have its y-coordinate offset after passing through the shader. This is the init method of the sprite (GroundZone) I want to mask:

        bool GroundZone::initWithSize(Size size) {
            // [...]
            // Setup the mask of the sprite
            m_mask = RenderTexture::create(textureWidth, textureHeight);
            m_mask->retain();
            m_mask->setKeepMatrix(true);
            Texture2D *maskTexture = m_mask->getSprite()->getTexture();
            maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask

            // Load the custom frag shader with a default vert shader as the sprite's program
            FileUtils *fileUtils = FileUtils::getInstance();
            string vertexSource = ccPositionTextureA8Color_vert;
            string fragmentSource = fileUtils->getStringFromFile(
                fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh"));
            GLProgram *shader = new GLProgram;
            shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
            shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
            shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
            shader->link();
            CHECK_GL_ERROR_DEBUG();
            shader->updateUniforms();
            CHECK_GL_ERROR_DEBUG();

            int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture");
            shader->setUniformLocationWith1i(maskTexUniformLoc, 1);

            this->setShaderProgram(shader);
            shader->release();
            // [...]
        }

    These are the custom drawing methods that actually draw the mask over the sprite. Note that m_mask is modified externally by another class; the onDraw() method only renders it.

        void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) {
            m_renderCommand.init(_globalZOrder);
            m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated);
            renderer->addCommand(&m_renderCommand);

            Sprite::draw(renderer, transform, transformUpdated);
        }

        void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) {
            GLProgram *shader = this->getShaderProgram();
            shader->use();

            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName());
            glActiveTexture(GL_TEXTURE0);
        }

    Below is the method (located in another class, GroundLayer) that modifies the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (Point (0,0) is bottom-left).

        void GroundLayer::drawTunnel(Point start, Point end) {
            // To dig a line, we first need to get the texture of the zone we will be digging into. Then we get the
            // relative position of the start and end point in the zone's node space. Finally we use the custom shader
            // to draw a mask over the existing texture.
            for (auto it = _children.begin(); it != _children.end(); it++) {
                GroundZone *zone = static_cast<GroundZone *>(*it);
                Point nodeStart = zone->convertToNodeSpace(start);
                Point nodeEnd = zone->convertToNodeSpace(end);

                // Now that we have our two points converted to node space, it's easy to draw a mask that contains a
                // line going from the start point to the end point and that is then applied over the current texture.
                Size groundZoneSize = zone->getContentSize();

                RenderTexture *rt = zone->getMask();
                rt->begin();
                {
                    // Draw a line going from start to end in the texture; the line will act as a mask over the
                    // existing texture
                    DrawNode *line = DrawNode::create();
                    line->retain();
                    line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED);
                    line->visit();
                }
                rt->end();
            }
        }

    Finally, here's the custom shader I wrote:

        #ifdef GL_ES
        precision mediump float;
        #endif

        varying vec2 v_texCoord;
        uniform sampler2D u_texture;
        uniform sampler2D u_alphaMaskTexture;

        void main() {
            float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
            float texAlpha = texture2D(u_texture, v_texCoord).a;
            float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible
            vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
            gl_FragColor = vec4(texColor, blendAlpha);
        }

    I've got a problem with the y coordinates: once it has passed through my custom shader, the sprite's texture is not in the right place. (Two screenshots followed here: one without the custom shader, where the sprite, the brown thing, is positioned correctly, and one with the custom shader.) What's going on here? Thanks :)

    EDIT: It looks like, after passing through the shader, the sprite's position is interpreted in points, with (0,0) in the top-right. Indeed, when I do sprite->setPosition(320, 480), the sprite is perfectly placed at the top of the screen.
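    One cheap diagnostic worth trying (a debugging sketch, not a confirmed fix): Cocos2d-x RenderTextures are often stored flipped on y relative to ordinary sprite textures, so sampling the mask with the sprite's own v_texCoord can land one texture-height away. Flipping only the mask lookup in the shader tells you quickly whether the offset comes from the mask sampling:

        // hypothetical tweak: flip the mask lookup on y while debugging
        float maskAlpha = texture2D(u_alphaMaskTexture,
                                    vec2(v_texCoord.x, 1.0 - v_texCoord.y)).a;

    If the offset disappears (or halves), the mismatch is between the render-texture orientation and the sprite's texture coordinates rather than in the draw code.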

    Read the article

  • How do I choose a package format for Linux software distribution?

    - by Ian C.
    We have a Java-based application that, to date, we've been distributing as a tarball with instructions for deploying. It's mostly self-contained, so deployment is fairly straightforward: untar on the disk you'd like it to live on; make sure a suitable distribution and version of Java is in your path; verify ownership and group on all the files; start up the server processes with our start script. If the user wants to get into start-on-boot stuff with SysV, we have written instructions and a template init file for it in our tarball. We'd like to make this installation process a little more seamless and take care of the permissions and the init script deployment. We're also going to start bundling our own JRE with the application so that we're mostly free of external dependencies. The question we're faced with now is: how do we pick a package format for distribution? Is RPM the standard? Can all package management tools deal with it now? Our clients primarily run RHEL and CentOS, but we do have some using SuSE and even Debian. If we can pick a distro-agnostic format we'd prefer that. What about a self-extracting shell script, something akin to how Java is distributed? If we're dependency-free, would the self-extracting script be sufficient? What features or conveniences would we lose out on going with the script versus a proper package format meant for use by a package manager?
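    For reference, the self-extracting approach mentioned above usually boils down to a shell stub with a tarball appended. A minimal sketch (the install path, ownership, and start script names are assumptions; tools like makeself automate the same idea):

        #!/bin/sh
        # Minimal self-extracting installer sketch. Everything after the
        # __ARCHIVE__ marker is a gzipped tarball appended to this script.
        set -e
        PREFIX=${1:-/opt/myapp}                # hypothetical install location
        SKIP=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
        mkdir -p "$PREFIX"
        tail -n +"$SKIP" "$0" | tar xzf - -C "$PREFIX"
        chown -R myapp:myapp "$PREFIX"         # the ownership step from the manual instructions
        "$PREFIX/bin/start.sh"                 # assumed start script
        exit 0
        __ARCHIVE__

    The trade-off against RPM/deb is roughly what you'd expect: no dependency tracking, no clean uninstall or upgrade bookkeeping, and no query/verify tooling, in exchange for working identically on every distro.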

    Read the article

  • Apache Server-Side Includes Refuse to Work (Tried everything in the docs but still no joy)

    - by raindog308
    Trying to get apache server-side includes to work. Really simple - just want to include a footer on each page. Apache 2.2: # ./httpd -v Server version: Apache/2.2.21 (Unix) Server built: Dec 4 2011 18:24:53 Cpanel::Easy::Apache v3.7.2 rev9999 mod_include is compiled in: # /usr/local/apache/bin/httpd -l | grep mod_include mod_include.c And it's in httpd.conf: # grep shtml httpd.conf AddType text/html .shtml DirectoryIndex index.html.var index.htm index.html index.shtml index.xhtml index.wml index.perl index.pl index.plx index.ppl index.cgi index.jsp index.js index.jp index.php4 index.php3 index.php index.phtml default.htm default.html home.htm index.php5 Default.html Default.htm home.html AddHandler server-parsed .shtml AddType text/html .shtml In the web directory I created a .htaccess with Options +Includes And then in the document, I have: <h1>next should be the include</h1> <!--#include virtual="/footer.html" --> <h1>include done</h1> And I see nothing in between those headers. Tried file=, also with/without absolute path. Is there something else I'm missing? I see the same thing on another unrelated server (more or less stock CentOS 6), so I suspect the problem is between keyboard and chair...
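    One common gotcha worth ruling out (an assumption, since the relevant <Directory> block isn't shown): Options +Includes in .htaccess is silently ignored unless the enclosing directory grants AllowOverride Options, and cPanel builds often lock this down. A hedged check for httpd.conf:

        <Directory "/home/user/public_html">
            # Without this, the .htaccess line "Options +Includes" is ignored
            AllowOverride Options
            # Alternatively, enable includes here directly:
            # Options +Includes
        </Directory>

    With mod_include compiled in, the handler mapped, and Options actually in effect, the include directive should start being parsed.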

    Read the article

  • cannot move file: "file cannot be found" (Windows XP)

    - by Steve
    I have some CR2 files in a subfolder of My Documents called My Photos on a Windows XP PC. I want to move them across a WIFI network to an external HDD attached to a Windows 7 PC. I have read/write permissions on the external HDD, as I mapped to this HDD using the Windows 7 user account. When I try to move a single CR2 file, I receive "Cannot copy IMG_3317: Cannot find the specified file. Make sure you specify the correct path and file name." If I refresh the source folder, the file is still there. It is not read only, and I have read/write access to the source file. I can view its properties. Why can't I move this file? I have been able to move similar files in the past.

    Read the article

  • Should EC2 server self register or have admin server

    - by hortitude
    I'm creating AWS servers using chef. I am also planning on enabling auto-scaling. While we have automatic monitoring set up already (Server Density, Nagios, etc.), I was also going to set up the CloudWatch alarm for the status check on the server. This led me down the path of trying to decide whether to install the ec2 command-line tools on the server itself (which then requires me to install Java on the servers, despite no other need for Java in our environment) or to possibly have an "admin" box that will check periodically for servers and make sure they have their alarms set. I expect this paradigm to carry over to other things we want to configure (perhaps ensuring that termination protection is set up on production servers?). Any thoughts as to why to go one direction or another?
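    As a sketch of the admin-box approach (using boto3 purely for illustration; the question predates it, and the region, alarm names, and thresholds are made up), a periodic job could enumerate instances and ensure each has a status-check alarm, which keeps Java and credentials off the app servers entirely:

        import boto3  # illustration only; any AWS API client works the same way

        ec2 = boto3.client("ec2", region_name="us-east-1")
        cw = boto3.client("cloudwatch", region_name="us-east-1")

        for reservation in ec2.describe_instances()["Reservations"]:
            for inst in reservation["Instances"]:
                iid = inst["InstanceId"]
                # idempotent: put_metric_alarm overwrites an alarm of the same name
                cw.put_metric_alarm(
                    AlarmName="status-check-" + iid,
                    Namespace="AWS/EC2",
                    MetricName="StatusCheckFailed",
                    Dimensions=[{"Name": "InstanceId", "Value": iid}],
                    Statistic="Maximum",
                    Period=300,
                    EvaluationPeriods=1,
                    Threshold=0,
                    ComparisonOperator="GreaterThanThreshold",
                )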

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor works in the editor, but when I draw the map the color that's supposed to become transparent still draws. The closest things to a solution that I've found are: 1) change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet); 2) get the pixels that are supposed to be transparent at some point, then set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since the custom pipeline processor appears to read in the transparent color and set it as the color key for transparency. At least that's what it looks like the code is doing; just something is going wrong somewhere.

    TileSetContent.cs:

        if (imageNode.Attributes["trans"] != null)
        {
            string color = imageNode.Attributes["trans"].Value;
            string r = color.Substring(0, 2);
            string g = color.Substring(2, 2);
            string b = color.Substring(4, 2);
            this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16));
        }
        ...

    TiledHelpers.cs:

        // build the asset as an external reference
        OpaqueDataDictionary data = new OpaqueDataDictionary();
        data.Add("GenerateMipMaps", false);
        data.Add("ResizetoPowerOfTwo", false);
        data.Add("TextureFormat", TextureProcessorOutputFormat.Color);
        data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue);
        data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta);
        tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>(
            new ExternalReference<Texture2DContent>(path), null, data, null, asset);
        ...

    I can share more code as well if it helps to understand my problem. Thank you.
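    If the pipeline's color key keeps misbehaving, the runtime fixup described in option 2 is less hacky than it sounds and is easy to contain in one place. A minimal sketch (keyColor and the load site are assumptions about your code):

        // After loading the tileset texture, replace every color-key pixel
        // with transparent; XNA's GetData/SetData make this a few lines.
        Color[] pixels = new Color[texture.Width * texture.Height];
        texture.GetData(pixels);
        for (int i = 0; i < pixels.Length; i++)
        {
            if (pixels[i] == keyColor)      // e.g. the trans color parsed from the .tmx
                pixels[i] = Color.Transparent;
        }
        texture.SetData(pixels);

    Done once at load time, the cost is negligible, and it sidesteps whatever is going wrong between ColorKeyEnabled and the texture processor.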

    Read the article

  • Inline Image in ASP.NET

    - by Ricardo Peres
    Inline images are a technique that, instead of referring to an external URL, includes all of the image's content in the HTML itself, in the form of a Base64-encoded string. It avoids a second browser request, at the cost of making the HTML page slightly heavier and not using the cache. Not all browsers support it, but current versions of IE, Firefox and Chrome do. In order to use inline images, you must write the img element's src attribute like this:

        <img src="data:image/gif;base64,R0lGODlhEAAOALMAAOazToeHh0tLS/7LZv/0jvb29t/f3//Ub//ge8WSLf/rhf/3kdbW1mxsbP//mf///yH5BAAAAAAALAAAAAAQAA4AAARe8L1Ekyky67QZ1hLnjM5UUde0ECwLJoExKcppV0aCcGCmTIHEIUEqjgaORCMxIC6e0CcguWw6aFjsVMkkIr7g77ZKPJjPZqIyd7sJAgVGoEGv2xsBxqNgYPj/gAwXEQA7"
            width="16" height="14" alt="embedded folder icon"/>

    The syntax is:

        data:[<mediatype>][;base64],<data>

    I developed a simple control that allows you to use inline images in your ASP.NET pages. Here it is:

        public class InnerImage : Image
        {
            protected override void OnInit(EventArgs e)
            {
                String imagePath = this.Context.Server.MapPath(this.ImageUrl);
                String extension = Path.GetExtension(imagePath).Substring(1);
                Byte[] imageData = File.ReadAllBytes(imagePath);

                this.ImageUrl = String.Format("data:image/{0};base64,{1}", extension, Convert.ToBase64String(imageData));

                base.OnInit(e);
            }
        }

    Simple, don't you think?
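    Usage is the same as the standard Image control; a hedged example (the tag prefix, namespace, and assembly names are assumptions about your project):

        <%@ Register TagPrefix="my" Namespace="MyControls" Assembly="MyControls" %>
        ...
        <my:InnerImage runat="server" ImageUrl="~/images/folder.gif" />

    At render time the ImageUrl is rewritten into the data: URI, so the page markup needs no other changes.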

    Read the article

  • Cannot access Windows 7 share from Windows XP

    - by artfulrobot
    I have a new Windows 7 machine named PAP44 in the PAP workgroup. The networking is set to "Work" mode for the wired LAN. I have a couple of users and I've shared a folder and set it so both users can read/write. Confusingly for me, rather than sharing just that folder (as I'm used to with older versions of Windows) it appears to be sharing a path (\\pap44\users\...\myFolder) From another machine on the LAN, running XP, when I go to \\PAP44\Users I'm asked for a username and password, but neither of the usernames+passwords work. It just jumps back to the username and password dialogue, except that the username I entered gets prefixed with PAP44\ My end goal is to get my Debian/Ubuntu machines to be able to access this share, but first of all I thought I'd try to get it working in Windows, after all, that's supposed to be easy! Is there another step? (PS. I am not a "hit and run" case!)
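    One hedged thing to try from the XP box (share and account names assumed): qualify the user with the Windows 7 machine name explicitly when mapping the share, since XP may otherwise send its own machine name as the logon domain, which matches the PAP44\ prefix you're seeing:

        net use Z: \\PAP44\Users /user:PAP44\yourUser *

    The trailing * prompts for the password. If that works from the command line but the Explorer dialog doesn't, the problem is the credential format rather than the share permissions, which also narrows things down for the later Samba/Debian setup.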

    Read the article

  • Lotus Notes - Fails to move inbox article to subfolder

    - by Klaptrap
    Lotus Notes 8.5 - yuck! A user is trying to move an email from the Inbox into a subfolder using the menu option (not dragging and dropping), but the email simply fails to move; it just appears again in the inbox. Drag and drop has the same effect! I have checked that it is a failure to move and not just a copy/delete issue. I am not a Notes administrator, being more of an MS Exchange administrator by career path, and have no idea what to be looking for.

    Read the article

  • Running ssh-agent from a shell script

    - by Dan
    I'm trying to create a shell script that, among other things, starts up ssh-agent and adds a private key to the agent. Example:

        #!/bin/bash
        # ...
        ssh-agent $SHELL
        ssh-add /path/to/key
        # ...

    The problem with this is that ssh-agent apparently kicks off another instance of $SHELL (in my case, bash), so from the script's perspective it has executed everything, and ssh-add and anything below it never runs. How can I run ssh-agent from my shell script and keep it moving on down the list of commands?
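    The usual pattern (a sketch; the key path is taken from the question) is to have ssh-agent print its environment setup instead of spawning a shell, and eval that output in the current script:

        #!/bin/bash
        # ssh-agent -s emits SSH_AUTH_SOCK/SSH_AGENT_PID exports for sh-style shells
        eval "$(ssh-agent -s)"
        ssh-add /path/to/key
        # ... the rest of the script runs with the agent available ...
        # optional cleanup so the agent doesn't outlive the script:
        trap 'kill $SSH_AGENT_PID' EXIT

    Without an argument, ssh-agent just prints those export statements; it only replaces the current process with a subshell when you pass it a command like $SHELL.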

    Read the article

  • Git Daemon on linux?

    - by bwawok
    Trying to set up a simple git-daemon on a linux server, and talk to it from a windows box.

    On the linux server:

    1. Make a folder /home/foo/bar
    2. CD to /home/foo/bar, do a git --bare init here
    3. Do a touch git-daemon-export-ok
    4. CD to /home/foo
    5. Run the command git-daemon --verbose --reuseaddr --base-path=/home/foo --enable=receive-pack

    On the Windows client with Tortoise Git:

    1. Do git.exe clone --progress -v "git://servername/bar" "C:\source\myFolderName" (works)
    2. Create file a.txt, add it to git, and commit (works)
    3. Do a git.exe pull "origin" master and get fatal: Couldn't find remote ref master (makes sense, master isn't there yet)
    4. Do a git.exe push "origin" master:master and tortoise hangs forever without doing anything

    I realize why I can't pull from master yet on the remote branch, but why can't I push my first commit into the remote repo? Step 4 really should work. Tried it both with tortoise and the msysgit command line; in both cases it hangs forever. What am I missing? The server has no useful log.
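    Worth checking on the server (hedged, since the daemon logs show nothing): make the receive-pack service explicit for the repository and confirm the daemon answers at all before retrying the push:

        # on the server, inside /home/foo/bar
        git config daemon.receivepack true

        # from any client: does the daemon respond and list refs?
        git ls-remote git://servername/bar

    If ls-remote hangs too, the problem is connectivity (firewall on port 9418) rather than push permissions; if it answers but the push still hangs, the receive-pack side is the place to dig.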

    Read the article

  • SetEnvIf regex for setting Content-Disposition HTTP header

    - by Erik Sorensen
    I am attempting to use the IHS 7.0/apache 2.2 SetEnvIf directive to set the filename of a downloaded file based on a url parameter. I think I am pretty close; however, if there is a space (encoded or otherwise) in the filename, it fails.

    Example url:

        http://site.com/path/to/filename.ext/file-title=Nice File Name.ext?file-type=foo

    Apache config:

        SetEnvIf Request_URI "^.*file-title\=(.*)\??.*$" FILENAME=$1
        Header unset "Content-Disposition"
        Header add "Content-Disposition" "attachment; filename=%{FILENAME}e"
        UnsetEnv FILENAME

    An application will specify what is now showing up as "Nice File Name.ext" in the example. This all works great if there are no spaces; however, if there is a space, the filename to download just shows up as "Nice". There may or may not be a second set of parameters in the query string (?file-type, etc).
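    A hedged guess at the fix: HTTP header parameter values containing spaces must be quoted, so the unquoted filename=... is truncated at the first space by the browser. Escaped quotes around the value, plus a capture that stops at the query string, would look like this (a sketch against the config above):

        SetEnvIf Request_URI "file-title=([^?]+)" FILENAME=$1
        Header unset "Content-Disposition"
        Header add "Content-Disposition" "attachment; filename=\"%{FILENAME}e\""
        UnsetEnv FILENAME

    The [^?]+ also avoids the greedy (.*) swallowing any ?file-type=... suffix when it is present.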

    Read the article

  • Joining an Active Directory domain using netdom

    - by Cheezo
    I have a simple script to join an AD domain and rename the computer. When I execute these commands directly on the CLI, it works fine. When I execute the same via batch file, I get an error saying The network path was not found I am running as Administrator with full privileges. I have googled around microsoft forums but my case is unique because it works from the CLI and not from the batch file netdom join %%computername%% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx netdom renamecomputer %%computername%% /NewName:%hostname% /Force The environment is Windows 2k8 R2 SP1 running on Ninefold Cloud (Xenserver).
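    One likely culprit, and it is consistent with CLI-works/batch-fails: in a batch file, %% is an escaped literal percent sign, so %%computername%% never expands and netdom is handed the literal string %computername%, hence "network path was not found". Environment variables use single percents even inside .bat files; doubled percents are only for FOR loop variables. A corrected sketch of the two lines:

        rem inside the batch file, single percents expand the environment variable
        netdom join %COMPUTERNAME% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
        netdom renamecomputer %COMPUTERNAME% /NewName:%hostname% /Force

    Note the second line already used single percents for %hostname%, which fits this explanation.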

    Read the article

  • snmpd agent sends duplicate traps

    - by jsnmp
    I am on Ubuntu 10.04.4 LTS, and I cannot upgrade to a higher version. I have installed the snmpd agent (NET-SNMP version 5.4.2.1) with an apt-get install snmpd command. When an event occurs which sends a trap, two traps are sent for each such event instead of one. For example, when I shut down the agent with command /etc/init.d/snmpd stop, two shutdown traps are sent to the destination host. If I then start back up the agent with command /etc/init.d/snmpd start, then two cold start traps are sent to the destination host. Is this a known issue? Is there a fix for this, or is there a configuration change that is needed to prevent the sending of the duplicate trap? These are the contents of the /etc/snmp/snmpd.conf file: rocommunity public authtrapenable 1 trap2sink <trap destination hostname> public These are the contents of the /etc/default/snmpd file: # This file controls the activity of snmpd and snmptrapd # MIB directories. /usr/share/snmp/mibs is the default, but # including it here avoids some strange problems. export MIBDIRS=/usr/share/snmp/mibs # snmpd control (yes means start daemon). SNMPDRUN=yes # snmpd options (use syslog, close stdin/out/err). SNMPDOPTS='-Ls3d -Lf /dev/null -u snmp -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf' # snmptrapd control (yes means start daemon). As of net-snmp version # 5.0, master agentx support must be enabled in snmpd before snmptrapd # can be run. See snmpd.conf(5) for how to do this. TRAPDRUN=no # snmptrapd options (use syslog). TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid' # create symlink on Debian legacy location to official RFC path SNMPDCOMPAT=yes
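    One hedged explanation that matches the symptom exactly: snmpd reads /etc/snmp/snmpd.conf by default, and the -c option in SNMPDOPTS adds the same file again, so every trap2sink line is registered twice and each trap goes out twice. Adding -C (read only the files given with -c) or dropping the -c flag is worth a try:

        # /etc/default/snmpd -- make snmpd read the config file exactly once
        SNMPDOPTS='-Ls3d -Lf /dev/null -u snmp -p /var/run/snmpd.pid -C -c /etc/snmp/snmpd.conf'

    After restarting snmpd, a single shutdown/cold-start trap per event would confirm the double-read was the cause.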

    Read the article

  • In Puppet, how would I secure a password variable (in this case a MySQL password)?

    - by Beaming Mel-Bin
    I am using Puppet to provision MySQL with a parameterised class: class mysql::server( $password ) { package { 'mysql-server': ensure => installed } package { 'mysql': ensure => installed } service { 'mysqld': enable => true, ensure => running, require => Package['mysql-server'], } exec { 'set-mysql-password': unless => "mysqladmin -uroot -p$password status", path => ['/bin', '/usr/bin'], command => "mysqladmin -uroot password $password", require => Service['mysqld'], } } How can I protect $password? Currently, I removed the default world readable permission from the node definition file and explicitly gave puppet read permission via ACL. I'm assuming others have come across a similar situation so perhaps there's a better practice.
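    Beyond file permissions on the manifest, note that the password also leaks into the process table whenever that exec runs, since ps shows full command lines. A hedged variant that keeps it out of the check command's arguments by using exec's environment parameter (mysqladmin honors the MYSQL_PWD environment variable; the quoting is an assumption to verify on your platform):

        exec { 'set-mysql-password':
          environment => "MYSQL_PWD=${password}",
          unless      => 'mysqladmin -uroot status',   # authenticates via MYSQL_PWD
          command     => "mysqladmin -uroot password '${password}'",
          path        => ['/bin', '/usr/bin'],
          require     => Service['mysqld'],
        }

    The new password still appears as an argument on the set command itself, and the value still lives in the compiled catalog, so an encrypted data source for the parameter (rather than a plain node definition file) is the complementary half of the fix.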

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
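    The stat cache mentioned above can be as small as a dict keyed by path; a sketch of the memoization (names are made up, and note a cache like this deliberately ignores files changing mid-build):

        import os

        _stat_cache = {}

        def cached_stat(path):
            """os.stat that remembers results for the lifetime of the build."""
            try:
                return _stat_cache[path]
            except KeyError:
                result = _stat_cache[path] = os.stat(path)
                return result

    Call sites that previously did os.stat(path) or os.path.isdir(path) can go through cached_stat instead, which is where the 1.5s-to-0.6s saving came from.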

    Read the article

  • Calculating Cloud Service Costs [closed]

    - by capdragon
    Possible Duplicate: Can you help me with my capacity planning? I would like to scale a web application to the Cloud and wanted to know if anyone had any experience calculating costs and could tell me how I would go about that. I have NO experience with cloud services at all. Currently my production environment consists of two web servers and one database server. If the application continues on a linear growth path, eventually, I want to scale to the cloud to avoid any more long term commitments to extra hardware. I want to be able to create a similar environment I have now as a baseline. Have this as my fixed cost that I will always have. I also want to calculate my variable costs that will increase with more users or bandwidth. I don't have a preferred cloud vendor. Amazon, Rackspace, Terremark or any other is fine as long as understand how to calculate my fixed and variable costs.

    Read the article

  • generate correctly a self signed certificate Zimbra

    - by rkmax
    I have a single mail server with Zimbra 8.0.0. To generate the certificate I'm following these steps:

        ORG=MyOrganization
        CN=mail.mydomain.com
        COUNTRY=myCountry
        CITY=myCity
        /opt/zimbra/bin/zmcertmgr createcrt -new -days 365 -subject "/C=$COUNTRY/ST=N/A/L=$CITY/O=$ORG/OU=ZCS/CN=$CN"
        /opt/zimbra/bin/zmcertmgr deploycrt self -allserver
        su - zimbra "zmcontrol restart"

    I verify with /opt/zimbra/bin/zmcertmgr viewdeployedcrt and can see the new cert. In Chrome I go to https://mail.mydomain.com and export the .cer, then test on a Windows client:

        certutil.exe -addstore root \path\to\exported.cert

    which fails with output along these lines (translated):

        root "Root Certification Authorities trusted"
        You can add a root certificate to the root store
        CertUtil: -addstore command error: 0x8007000d (WIN32: 13)
        CertUtil: Invalid data.

    Even from Chrome I've tried to add the cert, without successful results. Can anyone help me with this problem?
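    A hedged suggestion: 0x8007000d ("Invalid data") usually means the file isn't in a format certutil can parse, so re-export the certificate from Chrome as DER-encoded binary X.509 (or Base-64 encoded X.509) and add it from an elevated prompt using the canonical store name:

        certutil.exe -addstore -f Root C:\path\to\exported.cer

    The -f forces an overwrite if a stale entry exists, and "Root" is the system name for the Trusted Root Certification Authorities store; the path here is a placeholder for wherever you saved the export.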

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties

        Property         Data Type   Description
        Seed             Int32       The first row number or seed value.
        Increment        Int32       The value added to the previous row number to make the next row number.
        OutputVariable   String      The name of the variable into which the final row number is written post execution. (Optional)

    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of the property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

        Row Number Transformation for SQL Server 2005
        Row Number Transformation for SQL Server 2008
        Row Number Transformation for SQL Server 2012

    Version History

    SQL Server 2012
        Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
        Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005
        Version 1.2.0.34 - Updated installer. (25 Jun 2008)
        Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
        Version 1.0.0.0 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)

    Code Sample

    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

        Package package = new Package();
        package.Name = "Data Generator & Row Number";

        // Add the Data Flow Task
        Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = taskExecutable as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add Data Generator Source
        IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "Data Generator";
        componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceSource = componentSource.Instantiate();
        instanceSource.ProvideComponentProperties();
        instanceSource.SetComponentProperty("RowCount", 10000);

        // Add Row Number Tx
        IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
        componentRowNumber.Name = "FlatFileDestination";
        componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
        instanceRowNumber.ProvideComponentProperties();
        instanceRowNumber.SetComponentProperty("Increment", 10);

        // Connect the two components together
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

        #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

        package.Execute();

        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
            Console.WriteLine(" SubComponent : {0}", error.SubComponent);
            Console.WriteLine(" Description : {0}", error.Description);
        }

        package.Dispose();

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012.

    If you get an error when you try and use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object 'Konesans ...' is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Please also make sure you have installed a minimum of SP1 for SQL Server 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get an error "Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'." when trying to open the user interface, it implies that your development machine has not had SP1 applied.

    Very occasionally we get a problem to do with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, it can only be inferred. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to "Cannot convert System.Int32 to System.Decimal". Unfortunately this is beyond our control and there appears to be no pattern as to when this happens. If you do have more information we would be happy to hear it.

    To fix this issue you can manually edit the package file. In Visual Studio right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name or by the component name, and change the incorrect property data types highlighted below from Decimal to Int32.

        <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" ... >
            <properties>
                <property id="38" name="UserComponentTypeName" ...>
                <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
                <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
                ...

    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • Firefox 4 crashing in VNC

    - by MacThePenguin
    On my desktop PC I've been using Kubuntu 10.04 for about a year. Recently I've updated Firefox to version 4 using aptitude and the mozilla-team/firefox-stable repository. Since then, I can't run it when I'm logged in through a VNC session. Firefox crashes immediately: when I try to run it from the console I get this error: ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203 ###!!! ABORT: X_ShmPutImage: BadShmSeg (invalid shared segment parameter); 3 requests ago: file /build/buildd/firefox-4.0.1+build1+nobinonly/build-tree/mozilla/toolkit/xre/nsX11ErrorHandler.cpp, line 203 Firefox works fine when I run it directly from the PC. Firefox 3.x worked fine also from a VNC session. I tried to turn off the hardware acceleration from the Firefox preferences, but that doesn't fix the problem. firefox --sync, firefox -safe-mode and firefox -ProfileManager also crash the same way. Any idea how to troubleshoot this? Thanks. Edit: additional info. I run vnc (RealVNC 4.1.1) from xinetd, this is the config I use: service Xvnc { type = UNLISTED disable = no socket_type = stream protocol = tcp wait = yes user = root server = /usr/bin/Xvnc4 server_args = -inetd :1 -desktop vnc5901 -query localhost -geometry 1160x675 -depth 16 -once -DisconnectClients=0 -NeverShared passwordFile=/path/to/vnc/password -render port = 5901 }

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation. The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."

    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.

    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • iPack - The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications, and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as a last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code

    The source code of the iPack tool is in the OpenJFX project repository. You can find it in:

        <openjfx root>/rt/tools/ios/Maven/ipack

    To build the iPack tool use:

        rt/tools/ios/Maven/ipack$ mvn package

    After building, you can run the tool:

        java -jar <path to ipack.jar> <arguments>

    Signing keystore

    The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore users can use keytool from the JDK. One possible scenario is to import an existing private key and certificate from a key store used on Mac OS.

    To list the content of an existing key store and identify the source alias:

        keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>

    To create a Java key store and import the private key with its certificate into the key store:

        keytool -importkeystore \
            -destkeystore <dst keystore> -deststorepass <dst keystore password> \
            -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
            -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>

    Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back into the Java key store and complete the signing certificate entry.

    In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command:

        keytool -list -v -keystore <ipack keystore> -storepass <keystore password>

    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format: a certificate chain can be created by concatenating the PEM certificates, which should form the chain, into a single file. For iOS signing we need the following certificates in our chain:

        Apple Root CA
        Apple Worldwide Developer Relations CA
        Our signing leaf certificate

    To convert a certificate from the binary DER format (.der, .cer) to PEM format:

        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem

    To export the signing certificate into PEM format:

        keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem

    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:

        keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem

    To summarize, the following example shows the full certificate chain replacement process:

        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
        keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
        cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
        keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
        keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage

    When the ipack tool is started with no arguments it prints the following usage information:

        Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]

        Signing options:
            -keystore <keystore>    keystore to use for signing
            -storepass <password>   keystore password
            -alias <alias>          alias for the signing certificate chain and the associated private key
            -keypass <password>     password for the private key

        Application options:
            -basedir <directory>    base directory from which to derive relative paths
            -appdir <directory>     directory with the application executable and resources
            -appname <file>         name of the application executable
            -appid <id>             application identifier

        Example:
            ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication

    Read the article

  • Perl module error on solaris-10

    - by ramesh.mimit
    I have installed perl and the pm_dbdmysql perl module on solaris-10. I have a perl script which makes a mysql DB connection to a different server, runs some queries and returns the results. It works fine on linux (redhat), but when I run the script on solaris-10 it gives me the below error:

        2010-12-14 00:00:00 and 2010-12-14 23:59:59DAILY INSIDE : 2010-12-14 00:00:00 -- 2010-12-14 23:59:59
        install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /usr/local/lib/perl5/5.10.1/i86pc-solaris /usr/local/lib/perl5/5.10.1 /usr/local/lib/perl5/site_perl/5.10.1/i86pc-solaris /usr/local/lib/perl5/site_perl/5.10.1 .) at (eval 15) line 3.
        Perhaps the DBD::mysql perl module hasn't been fully installed, or perhaps the capitalisation of 'mysql' isn't right. Available drivers: DBM, ExampleP, File, Gofer, Multiplex, Proxy, Sponge, Sybase. at cerberus_report.pl line 114

    Though the dbd-mysql perl module is already installed:

        PKGINST: CSWpmdbdmysql
        NAME: pm_dbdmysql - MySQL driver for the Perl5 Database Interface (DBI)

    Is it something related to the path variables, or do I need some other perl module dependency?
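    A hedged pointer: CSW packages install under the OpenCSW tree, while the error shows @INC only contains /usr/local paths, so the perl you are running never looks where the module actually lives. Locating the file and adding its directory to the search path should confirm it (the /opt/csw paths below are assumptions about the package layout; adjust to whatever find reports):

        # find where the package actually put the module
        find /opt/csw -name mysql.pm 2>/dev/null

        # then either point the environment at that directory
        PERL5LIB=/opt/csw/lib/perl/site_perl; export PERL5LIB

        # or run the OpenCSW perl, which already has it in @INC
        /opt/csw/bin/perl cerberus_report.pl

    Alternatively, a "use lib '/opt/csw/lib/perl/site_perl';" line at the top of the script pins the path without touching the environment.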

    Read the article

  • How can I move along an angled collision at a constant speed?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (in actuality, the right side of a parallelogram). My collision detection and resolution code works fine for the purposes of preventing a gameobject from entering into the space of the Triangle, instead directing the movement along the edge. The trouble is, the maximum speeds along the x and y axes are not equivalent in my game: moving along the Y axis (up or down) should take twice as long as an equivalent distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path above progresses twice as fast. What can I do in my collision resolution to make sure that the speed limit for Y axis movement is obeyed in the latter case? Collision resolution for this case is below (vecInput and velocity are the position and velocity vectors of the game object):

        // y = mx+c
        lowY = 2*vecInput.x + parag.rightYIntercept;
        ...
        else {
            // y = mx+c
            // vecInput.y = 2(x) + RightYIntercept
            // (vecInput.y - RightYIntercept) / 2 = x;

            // if velocity.Y (positive) is greater than velocity.X (negative),
            // we're pushing from the bottom, so push right.
            if(velocity.y > -1*velocity.x) {
                vecInput = new Vector2((vecInput.y - parag.rightYIntercept)/2, vecInput.y);
                Debug.Log("adjusted rightwards");
            }
            else {
                vecInput = new Vector2(vecInput.x, lowY);
                Debug.Log("adjusted downwards");
            }
        }
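    One way to approach it (a hedged sketch; previousPosition, maxSpeedY and the surrounding names are assumptions): after projecting the position onto the edge, measure the per-axis displacement actually travelled this frame and rescale the whole step if its y component exceeds the y speed limit, so the object slides along the slope at the slower axis's pace:

        // Hypothetical post-resolution clamp (Unity-style C#).
        Vector2 step = vecInput - previousPosition;      // movement this frame
        float maxStepY = maxSpeedY * Time.deltaTime;     // assumed y speed limit
        if (Mathf.Abs(step.y) > maxStepY)
        {
            // shrink the step uniformly so the slide keeps its direction
            step *= maxStepY / Mathf.Abs(step.y);
            vecInput = previousPosition + step;
        }

    Scaling the whole vector (rather than clamping y alone) keeps the object on the edge instead of drifting into or away from it.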

    Read the article
