Search Results

Search found 19953 results on 799 pages for 'post get'.


  • How to Detect Sprites in a SpriteSheet?

    - by IAE
    I'm currently writing a sprite sheet unpacker such as Alferd Spritesheet Unpacker. Now, before this is sent to gamedev, this isn't necessarily about games. I would like to know how to detect a sprite within a spritesheet, or more abstractly, a shape inside of an image. Given this sprite sheet: I want to detect and extract all individual sprites. I've followed the algorithm detailed in Alferd's blog post, which goes like this: determine the predominant color and dub it the BackgroundColor; iterate over each pixel and check whether ColorAtXY == BackgroundColor; if false, we've found a sprite, so keep going right until we find the BackgroundColor again, backtrack one, go down, and repeat until the BackgroundColor is reached; create a box from the starting location to the ending location; repeat this until all sprites are boxed up; combine overlapping boxes (or boxes within a very short distance). The resulting non-overlapping boxes should each contain a sprite. This implementation is fine, especially for small sprite sheets, but I find the performance too poor for larger sprite sheets and I would like to know what algorithms or techniques can be leveraged to speed up finding the sprites. A second implementation I considered, but have not tested yet, is to find the first non-background pixel, then use a backtracking algorithm to find every connected pixel. This should find a contiguous sprite (it breaks down if the sprite is something like an explosion, where particles are no longer part of the main sprite). The cool thing is that I can immediately remove a detected sprite from the sprite sheet. Any other suggestions?
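    As a rough illustration of that second idea, here is a minimal sketch of a connected-component scan that produces one bounding box per contiguous sprite. It assumes Pillow is available, treats the most common pixel value as the background, and uses a 4-neighbour flood fill; the file name and function name are hypothetical, not the asker's actual code.

      from collections import Counter, deque
      from PIL import Image  # assumption: Pillow is installed

      def find_sprite_boxes(path):
          img = Image.open(path).convert("RGB")
          w, h = img.size
          px = img.load()
          # Assume the most common color is the background.
          background = Counter(px[x, y] for x in range(w) for y in range(h)).most_common(1)[0][0]
          seen = [[False] * h for _ in range(w)]
          boxes = []
          for x in range(w):
              for y in range(h):
                  if seen[x][y] or px[x, y] == background:
                      continue
                  # Flood-fill this blob and track its bounding box.
                  queue = deque([(x, y)])
                  seen[x][y] = True
                  min_x, min_y, max_x, max_y = x, y, x, y
                  while queue:
                      cx, cy = queue.popleft()
                      min_x, min_y = min(min_x, cx), min(min_y, cy)
                      max_x, max_y = max(max_x, cx), max(max_y, cy)
                      for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                          if 0 <= nx < w and 0 <= ny < h and not seen[nx][ny] and px[nx, ny] != background:
                              seen[nx][ny] = True
                              queue.append((nx, ny))
                  boxes.append((min_x, min_y, max_x, max_y))
          return boxes

      # for box in find_sprite_boxes("spritesheet.png"):  # hypothetical file name
      #     print(box)

    Boxes that end up very close together (the explosion case) can still be merged afterwards with the same proximity test the blog post describes.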

    Read the article

  • chunked response in nginx not working

    - by Dean Hiller
    I ran into this post, which shows my problem exactly: http://nginx.org/en/docs/faq/chunked_encoding_from_backend.html. But browsers are using HTTP/1.1 these days, so I really don't understand. Our backend is the Play Framework and I don't mind fixing it, but I don't really understand what is not working, especially since Firefox, Safari and Chrome all download the response just fine with no problems. Only when we stick nginx in the middle do things break, and we end up with extra data in our JSON responses. Any idea how to fix this? The doc above just seems wrong, since we are now on later versions of HTTP and the browsers seem to work just fine. Thanks, Dean
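    For what it is worth, the workaround usually suggested for this situation is to let nginx speak HTTP/1.1 to the backend, so that chunked responses from Play are legal on that hop. This is only a sketch: the upstream address is hypothetical, and it assumes nginx 1.1.4 or later, where the proxy_http_version directive exists.

      location / {
          proxy_pass http://127.0.0.1:9000;   # hypothetical Play backend
          proxy_http_version 1.1;             # nginx talks HTTP/1.0 upstream by default
          proxy_set_header Connection "";     # allow keep-alive toward the upstream
      }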

    Read the article

  • Problems with Adobe Flash

    - by Georg Scholz
    I'm running Windows 7, 64-bit, on a Core i5 HP notebook. For approximately the last 3 months I've had major problems with Adobe Flash. Flash has been uninstalled and installed again multiple times, with no change. Here is what happens in different browsers (all browsers are on the latest version as of this post): Firefox: doesn't work at all. This bothers me most, since FF is my favorite browser and I'm using a lot of plugins. I have tried deactivating all plugins, but no change. IE and Chrome: works, but most pages with Flash are stuck during page load for ~30 seconds. After that, everything is fine. Opera: no problems, everything working as it should. Strange, eh? Any help is highly appreciated!

    Read the article

  • SQL – Crossword Puzzle Based on Course Building Successful High Traffic Profitable Blog

    - by Pinal Dave
    Do you like crossword puzzles? I personally love them. Every time I open the newspaper, I try to solve at least one crossword or sudoku. It is just fun to tease the brain a little and stretch its limits. Regular readers of the blog are aware that I have recently published two courses on how to build a successful, high-traffic, profitable blog. Here are the links to watch both courses: Course 1, Course 2. Do watch them in order, as both courses have unique content which can help you build a better blog. On my birthday, July 30th, an interesting blog post appeared on the Pluralsight blog: a crossword built from my two courses. I encourage you to try to solve the crossword I have built. Giveaway: There is a cool gift for the winner: a melting clock. Do not mistake it for a dummy or non-working clock. It looks like it is melting, but it always shows accurate time and is perfectly balanced to hang off of any flat surface. How to participate: It is very simple; you just have to complete the crossword and send it to me at pinal at sqlauthority.com with all valid answers. The deadline: you must send it before Monday, August 5, 2013, or before the valid answer keys are posted on the Pluralsight blog. Hints: Though the crossword is very easy and intuitive, if you ever get stuck, here are two hints: Hint 1, Hint 2. Log in to the Pluralsight courses and watch both of them. Watching the courses will not only help you easily complete the crossword, but there are also hidden gems and secrets for building a high-traffic profitable blog. Here is the link to download the crossword: Download Crossword. Alternatively, you can download the image displayed below and print it as well. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Blogging

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: “How much data can SQL Server handle?” Every single time I get this question, I ask a question back: “How much data can your storage system handle?” The reason I ask this back is that, in reality, for enterprise systems the limitation of storage is no longer an issue. As a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and a very mature product. If you still want to know the actual limit, here is the answer: SQL Server 2008 R2, 2012 and 2014 have a maximum database size of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess that now, when you look at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is a huge amount of data, but we never know: when we read this blog post 10 years from now, we may all wonder what I was actually thinking. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
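    If you are curious how close your own instance is to that ceiling, a quick sketch using the sys.master_files catalog view looks something like this (the column aliases and rounding are just my choices; size is reported in 8 KB pages):

      -- Approximate size of every database on the instance, in GB.
      SELECT DB_NAME(database_id) AS database_name,
             CAST(SUM(size) * 8.0 / 1024 / 1024 AS DECIMAL(12, 2)) AS size_gb
      FROM sys.master_files
      GROUP BY database_id
      ORDER BY size_gb DESC;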

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the web server:
    1. The user loads the web form.
    2. The user enters data into the web form fields.
    3. The user clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    7. The server reads the posted data from the request and validates it again, to ensure data consistency and to prevent any non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to another process.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.
    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a failover process. Client-side validation allows users to correct any errors before the data is sent to the web server for processing, and it gives the user an immediate response about data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will, over time, free up the server compared to doing only server-side validation. Server-side validation is the last line of defense, because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over the other, especially given the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
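    As a small illustration of the server-side half of that process, here is a minimal, framework-free sketch of re-validating posted values before they reach a database or business process. The field names, rules, and the deliberately simple email pattern are assumptions made up for the example.

      import re

      EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # intentionally simple

      def validate_registration(form):
          """Return a dict of field -> error message; an empty dict means the data passed."""
          errors = {}
          name = (form.get("name") or "").strip()
          email = (form.get("email") or "").strip()
          age = (form.get("age") or "").strip()
          if not name:
              errors["name"] = "Name is required."
          if not EMAIL_RE.match(email):
              errors["email"] = "Email address does not look valid."
          if not age.isdigit() or not (0 < int(age) < 130):
              errors["age"] = "Age must be a whole number between 1 and 129."
          return errors

      # Whatever the client-side script claimed, check the values again on the server.
      posted = {"name": "Ann", "email": "ann@example.com", "age": "31"}
      problems = validate_registration(posted)
      if problems:
          print("Re-display the form with these messages:", problems)
      else:
          print("Safe to hand off to the business process.")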

    Read the article

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros and cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead required. It was mentioned that a simple jQuery/Ajax post to an .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may solve the "bloated"-ness required in obtaining a proxy, etc., by just outputting JSON. So when developing a relational DB (MSSQL) storage model with a fairly complex business layer (C#) and data access layer (Entity Framework), what is a good technology for creating a "service layer" that will spit out view models represented as JSON, with a CQRS (Command Query Responsibility Segregation) approach in mind? The app would use the service layer to support its required UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words, an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI. What are some potential technologies to use as the transport/communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are: WCF; the WCF Web API (should this even be separate?); straight request/response (query string to an .aspx page or handler). Would using ASP.NET MVC solve this entire problem? Maybe its single-page app approach? Any suggestions or feedback from developing this type of application? Thanks,

    Read the article

  • Why doesn't C# support multiple inheritance?

    - by Jalpesh P. Vadgama
    Yesterday, one of my friends, Dharmendra, asked me why C# does not support multiple inheritance. This is a question many people ask, so I thought it would be good to write a blog post about it. So why does it not support multiple inheritance? I dug into the problem and found some good links from Microsoft's C# team on why it is not supported. Following is one of them: http://blogs.msdn.com/b/csharpfaq/archive/2004/03/07/85562.aspx Also, I gave my friend Dharmendra an example of where multiple inheritance can be a problem. The problem is called the diamond problem. Let me explain a bit. If you have a class that inherits from more than one class, and two of those base classes have a method with the same signature, then for the child class object it is impossible to call the specific parent class method unambiguously. Here is a link that explains more about the diamond problem: http://en.wikipedia.org/wiki/Diamond_problem Now some people might ask why the same situation is supported with interfaces. With interfaces you can implement the method explicitly, stating that this is the method for the first interface and this is the method for the second interface. This is not possible with multiple inheritance. Following is an example of how we can implement multiple interfaces on a class and call the explicit method for a particular interface: Multiple Inheritance in C# That's it. Hope you like it. Stay tuned for more updates. Till then, happy programming.

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog store that can be enabled with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they are executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

      Declare @execution_id bigint
      EXEC [SSISDB].[catalog].[create_execution]
          @package_name = 'my_package.dtsx',
          @execution_id = @execution_id OUTPUT,
          @folder_name = N'BI',
          @project_name = N'DWH',
          @use32bitruntime = False,
          @reference_id = Null
      Select @execution_id

      DECLARE @var0 smallint = 1
      EXEC [SSISDB].[catalog].[set_execution_parameter_value]
          @execution_id,
          @object_type = 50,
          @parameter_name = N'LOGGING_LEVEL',
          @parameter_value = @var0

      DECLARE @var1 bit = 0
      EXEC [SSISDB].[catalog].[set_execution_parameter_value]
          @execution_id,
          @object_type = 50,
          @parameter_name = N'DUMP_ON_ERROR',
          @parameter_value = @var1

      EXEC [SSISDB].[catalog].[start_execution] @execution_id
      GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started, thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure that you add the SYNCHRONIZED parameter to the package execution, before the start_execution procedure:

      EXEC [SSISDB].[catalog].[set_execution_parameter_value]
          @execution_id,
          @object_type = 50,
          @parameter_name = N'SYNCHRONIZED',
          @parameter_value = 1

    And that's it. PS: From RC0 onwards, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.

    Read the article

  • bash code in rc.local not executing after bootup

    - by mrTomahawk
    Does anyone know why a system would not execute the script code within rc.local on bootup? I have a post-configuration bash script that I want to run after the initial install of VMware ESX (Red Hat), and for some reason it doesn't seem to execute. I have it set up to log the start of its execution and even its progress so that I can see how far it gets in case it fails at some point, but even when I look at that log, I find that it didn't even start executing the script code. I have already checked that the script has execute permissions (755); what else should I be looking at? Here are the first few lines of my code:

      #!/bin/sh
      echo >> /tmp/configLog ""
      echo >> /tmp/configLog "Entering maintenance mode"

    Read the article

  • What causes critical glib errors (when coding using messaging menu)?

    - by fluteflute
    If I run the python code below (almost entirely from this useful blog post) then I get three identical nasty looking error messages in the terminal. What might be causing them? I note the number (5857 in the example below) changes slightly on each run. What does this number signify? Is it a memory location or something similar?

      (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed
      (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed
      (messaging-menu.py:5857): GLib-GIO-CRITICAL **: g_dbus_method_invocation_return_dbus_error: assertion `error_name != NULL && g_dbus_is_name (error_name)' failed

    I'm running this on Natty, I should probably find out if I get the same errors in 10.10 though...

      import gtk

      def show_window_function(x, y):
          print x
          print y

      # get the indicate module, which does all the work
      import indicate

      # Create a server item
      mm = indicate.indicate_server_ref_default()
      # If someone clicks your server item in the MM, fire the server-display signal
      mm.connect("server-display", show_window_function)
      # Set the type of messages that your item uses. It's not at all clear which types
      # you're allowed to use, here.
      mm.set_type("message.im")
      # You must specify a .desktop file: this is where the MM gets the name of your
      # app from.
      mm.set_desktop_file("/usr/share/applications/nautilus.desktop")
      # Show the item in the MM.
      mm.show()

      # Create a source item
      mm_source = indicate.Indicator()
      # Again, it's not clear which subtypes you are allowed to use here.
      mm_source.set_property("subtype", "im")
      # "Sender" is the text that appears in the source item in the MM
      mm_source.set_property("sender", "Unread")
      # If someone clicks this source item in the MM, fire the user-display signal
      mm_source.connect("user-display", show_window_function)
      # Light up the messaging menu so that people know something has changed
      mm_source.set_property("draw-attention", "true")
      # Set the count of messages in this source.
      mm_source.set_property("count", "15")
      # If you prefer, you can set the time of the last message from this source,
      # rather than the count. (You can't set both.) This means that instead of a
      # message count, the MM will show "2m" or similar for the time since this
      # message arrived.
      # mm_source.set_property_time("time", time.time())
      mm_source.show()

      gtk.main()

    Read the article

  • Automating and deploying new linux servers

    - by luckytaxi
    I'm in the process of developing a method to automate bringing new virtual machines into my environment. 90% of our machines are virtual, but the process is similar for both physical and VMware-based images. What I do now is use Cobbler to install the base OS. The kickstart script has post hooks that modify the yum repo and install Puppet and Func. Once the servers are running, I manually add them into Nagios and sign the certificate via the Puppet master. I've since migrated most of the resources to use MySQL as the backend. I wanted to see what others are doing. My goal for 2011 is to have Puppet inventory the hardware into MySQL, and then somehow script something in Python to have Nagios grab the info and automatically add the hosts for monitoring purposes. It's kind of tedious to have to add each new server into Nagios, Puppet's dashboard, Munin, etc.
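    For readers who have not seen one, a bare-bones sketch of the kind of kickstart %post section being described might look like the following; the repository URL, log path, and package names are purely hypothetical.

      %post --log=/root/ks-post.log
      # Point yum at an internal mirror (hypothetical URL), then pull in the tooling.
      {
        echo '[internal]'
        echo 'name=Internal mirror'
        echo 'baseurl=http://repo.example.com/rhel/x86_64/'
        echo 'enabled=1'
        echo 'gpgcheck=0'
      } > /etc/yum.repos.d/internal.repo

      yum -y install puppet func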

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SharePoint, Exchange) that are configured with static IP addresses, with NICs attached to the Internal Network. Everything works. I then had the requirement to run some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I installed another VM with Ubuntu Server 12.04 64-bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the Internal Network with its own static IP address on the internal subnet. But the Windows servers do not connect to the internet, while the Unix one does. I ran the route command; this is the result:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         10.69.121.1     0.0.0.0         UG    100    0        0 eth0
      10.69.121.0     *               255.255.255.0   U     0      0        0 eth0
      192.168.83.0    *               255.255.255.0   U     0      0        0 eth1

    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarding server, but I do not know exactly which server to forward to... :(
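    Just as a sketch of the pieces a Linux box usually needs before it will route and NAT for an internal subnet, and not a full answer to the DNS part: assuming, as in the route output above, that eth0 is the WAN side and eth1 the internal side, the usual steps are

      # Enable packet forwarding (add net.ipv4.ip_forward=1 to /etc/sysctl.conf to keep it).
      sudo sysctl -w net.ipv4.ip_forward=1

      # Masquerade internal traffic out of the WAN interface.
      sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    The Windows DNS server would then forward external lookups either to a resolver running on the Ubuntu box or directly to a public resolver reachable through the new gateway.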

    Read the article

  • Can I pin a cmd/batch file to command prompt on the Windows 7 taskbar?

    - by eidylon
    Hey all, just wanted to ask quickly if anyone knows how to do this. If not, I'll post it on connect.microsoft.com as a suggestion. With Explorer pinned to the taskbar you can pin folders to Explorer, so they appear in the jump list for the Explorer shortcut. I have a shortcut to CMD pinned to my taskbar and would like to do the same thing, that is, pin a .bat/.cmd file to the CMD shortcut so that in the CMD jump list I have quick access to a couple of batch files I use routinely. Does anyone know a way to do this? I tried just dragging the batch file onto the CMD shortcut on the taskbar, the way you do with folders and Explorer, but that did not work.

    Read the article

  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm (a 15-minute interval is OK) copy of production data (ServerA). ServerA is also SQL 2008. Log shipping looks like it will do the job. We can only get to the destination ServerB with Remote Desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able, temporarily (until a VPN is set up between our network and the ServerB network), to manually export a small .trn file, copy it via Remote Desktop to ServerB, and then manually import those transactions from the .trn file. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions on this.
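    For what the manual version might look like, here is a rough sketch; the database name and file paths are hypothetical, and it assumes ServerB already holds a restored full backup left in NORECOVERY or STANDBY mode so that further log restores are possible.

      -- On ServerA: take a transaction log backup to a small .trn file.
      BACKUP LOG ProductionDB
          TO DISK = 'C:\Temp\ProductionDB_log_1200.trn';

      -- Copy the file to ServerB over Remote Desktop, then on ServerB:
      RESTORE LOG ProductionDB
          FROM DISK = 'C:\Temp\ProductionDB_log_1200.trn'
          WITH STANDBY = 'C:\Temp\ProductionDB_undo.dat';   -- or WITH NORECOVERY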

    Read the article

  • Links not opening in Windows 8 applications

    - by Martin Brown
    I have just upgraded my laptop from Windows 7 Ultimate with IE9 to Windows 8 with IE10. Whenever I try to open a web link in an application, nothing happens. For example, the Windows 8 Mail application makes the email address in an email look like a link. When I click on these with the mouse, the link changes to look like I have clicked on it, but aside from that nothing happens. I have tried making sure that IE10 is my default web browser, and have been into IE10's settings and associated it with all the usual file types, but still no success. Has anyone got any ideas what might be causing this? I have a feeling this started when the Browser Choice update ran. I couldn't select which browser to install in that either; the install buttons just flashed when I clicked them and then nothing. PS: I would have added an IE10 tag to this post, but apparently I don't have the reputation required to do so.

    Read the article

  • Video for an ads-driven web-site

    - by AntonAL
    I have a website which I will fill with a bunch of useful videos. I've implemented an ads-rotation engine for articles and will do the same for videos. The next milestone is to decide how video will be integrated. There are two ways:
    1. Host the videos myself. Pros: complete freedom. Cons: needs tens of gigabytes of storage, plus support for multiple formats to be cross-browser and cross-device.
    2. Use YouTube. Pros: very simple to use; nothing to do.
    What are the pros and cons of each way? Some questions about YouTube: Will I be able to control playback of a YouTube-embedded video to insert post-rolls? What is the ranking impact on my website when most of its pages refer to YouTube? Will, say, an iPad play video embedded via YouTube's iframe? Does relying entirely on YouTube have a long-term perspective for a website that should bring in money?

    Read the article

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After running apt-get purge and apt-get autoremove and emptying the Trash can, I still have this problem, as shown by this screenshot of GParted: The disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks well, and the disk /dev/sda6 is not mounted properly as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you everybody): My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself and manually.) After I mount /dev/sda6, the command df -h gives:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda7       244G  221G   12G  96% /
      udev            3,9G  4,0K  3,9G   1% /dev
      tmpfs           1,6G  904K  1,6G   1% /run
      none            5,0M     0  5,0M   0% /run/lock
      none            3,9G  164K  3,9G   1% /run/shm
      /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER. I found that its behavior was not the one I expected, so I investigated further. In the end, I wrote up my findings in this article on SQLBI, which applies to any Time Intelligence function with a <dates> argument. The key point is that when you write LASTDATE( table[column] ), in reality you obtain something like LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) ), which converts an existing row context into a filter context. Thus, if you have something like FILTER( table, table[column] = LASTDATE( table[column] ) ), the FILTER will return all the rows of table, whereas you probably want to use FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) ), so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would add a CALCULATETABLE hiding the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here. In these days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early-bird discount until October 3rd! Then, in November, I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write other articles on interesting things in Tabular and PowerPivot. Stay tuned!

    Read the article

  • Autocomplete in Silverlight with Visual Studio 2010

    - by Sayre Collado
    Last week I kept searching for how to use the AutoCompleteBox in Silverlight with Visual Studio 2010, but most of the examples I found use a TextBox or ComboBox for the autocomplete. I tried to study those examples and apply them to the single AutoCompleteBox from the toolbox in my Silverlight project, and this is the result. I will again use the database from my previous post (Silverlight Simple DataBinding in DataGrid) to show how the autocomplete works with a database. This is the output: First, this is the setup for my AutoCompleteBox: //The tags for autocompletebox on XAML Second, my simple snippets:

      //Event for the autocomplete to send a text string to my function
      private void autoCompleteBox1_KeyUp(object sender, KeyEventArgs e)
      {
          autoCompleteBox1.Populating += (s, args) =>
          {
              args.Cancel = true;
              var c = new Service1Client();
              c.GetListByNameCompleted += new EventHandler<GetListByNameCompletedEventArgs>(c_GetListByNameCompleted);
              c.GetListByNameAsync(autoCompleteBox1.Text);
          };
      }

      //Getting result from database
      void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
      {
          autoCompleteBox1.ItemsSource = e.Result;
          autoCompleteBox1.PopulateComplete();
      }

    The snippets above show how to use the AutoCompleteBox with data from the database that is bound to the DataGrid. But what if we want to show the result in the DataGrid while the autocomplete changes its items source? OK, just add one line to c_GetListByNameCompleted:

      void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
      {
          autoCompleteBox1.ItemsSource = e.Result;
          autoCompleteBox1.PopulateComplete();
          dataGrid1.ItemsSource = e.Result;
      }

    Read the article

  • Sorry. Not Much Happened Today!

    - by steve.diamond
    And THAT blog headline is dedicated to Seth Godin, who recently wrote that unlike their print brethren, digital media outlets aren't burdened with having to make their articles long enough to match the number of surrounding ad pages. He states that just because you CAN write more doesn't mean you SHOULD. Well, you don't have to tell me that twice. So to continue my rambling entry today, I'd suggest you read this post by Donal Daly on 10 steps to intelligent Social CRM for Sales. No, seriously, read it. It's almost like a Groundswell Cliff Notes for salespeople. I particularly love his third point. Of course I haven't "gotten" it yet, but I've got a whole lifetime, for crying out loud. Seriously, this is a great read and a fast one. And finally, in the department of longer reads, thanks and a shout-out to Paul Greenberg for mentioning Oracle's new iPad app for Siebel CRM in his ZDNet blog. Hey, I warned you... not much happened today. Per se!

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by [email protected]
    By Brian Dayton on April 12, 2010 11:15 AM
    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of a PeopleSoft consulting shop in San Ramon, CA called Grey Sparling. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting. His commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through was also interesting:
    · "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."
    · "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."
    · "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of the WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."
    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting, especially the advantages of aligning development of applications and infrastructure under one roof. Here's a link to the whole blog entry.

    Read the article

  • Seeking .htaccess help: Converting multiple subdomains (both HTTP and HTTPS) to www.domain.com using .htaccess

    - by Joshua Dorkin
    I've been trying to get an answer to this question on other forums (the folks at SuperUser thought this was the place I needed to post) and via my connections, but I haven't gotten very far. Hopefully you guys can help me find an answer. I've got a dozen old subdomains that have been indexed by Google, as both HTTP and HTTPS. I've managed to redirect all the subdomains properly provided they are not HTTPS, but I can't get any of the HTTPS subdomains to redirect properly. Here's the code I'm using:

      RewriteCond %{HTTP_HOST} ^subdomain1.mysite.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

      RewriteCond %{HTTP_HOST} ^subdomain2.mysite.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

      RewriteCond %{HTTP_HOST} ^subdomain3.mysite.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

    This works great until someone goes to https://subdomain2.mysite.com, which is not redirected back to http://www.mysite.com. How can I get this to work? Additionally, I'm guessing there is an easier way to make it happen than setting up a dozen pairs of RewriteCond/RewriteRule? Is there any way to do this in just a few lines, including one where I list all the subdomains? I'd actually also like to redirect everything on https://www.mysite.com to http://www.mysite.com except for 3 folders: mysite.com/secure, mysite.com/store, mysite.com/user. Is there any good way to add this to the .htaccess file?
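    Not an authoritative answer, but a sketch of how the repeated pairs are often collapsed with an alternation in the host condition, plus a separate rule for HTTPS on www. One caveat worth stating plainly: the HTTPS redirects can only work if the server presents a certificate that is valid for those subdomains, because the TLS handshake completes before any rewrite rule runs.

      # Any listed subdomain (HTTP or HTTPS) -> http://www.mysite.com/...
      RewriteCond %{HTTP_HOST} ^(subdomain1|subdomain2|subdomain3)\.mysite\.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

      # HTTPS on www -> HTTP, except the three folders that must stay secure.
      RewriteCond %{HTTPS} on
      RewriteCond %{HTTP_HOST} ^www\.mysite\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/(secure|store|user)(/|$)
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]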

    Read the article

  • Siebel BIP Integration

    - by Tim Dexter
    This post is more of a bookmark for me, so that I stop bugging the brown stuff out of John, the Siebel-BIP product manager. I have had multiple customers over the past two weeks asking for help with the integration. What is it capable of? How can I allow my users to click a button to run a BIP report? How can I kick off a report from a Siebel workflow? Start right here: this is a great white paper explaining what's now available with the integration using the Siebel Report Business Service. Once you have consumed that from start to finish, get on over to Oracle Support and look for the following note, which has code samples and lots of other good stuff: Siebel BI Publisher Reports Business Service (8.1.1.7+) [ID 1425724.1]. The Reports Business Service enables BI Publisher reports to be executed from the Siebel application via a Workflow Process, or through scripting. The report is generated in the background by connecting to the BI Publisher server. The report output is stored in the Siebel File System and accessed from the My BI Publisher Reports view. Alternatively, using appropriate methods, the report can be attached to an entity or sent to a particular delivery channel.

    Read the article

  • Package manager borked with gforge

    - by Leif Andersen
    I've been having a problem with the package manager. I seem to have partially installed gforge, and it keeps giving me errors whenever I install something. (Note that the thing I'm trying to install actually does get installed, but an error is always returned.) Here it is:

      Creating /etc/gforge/httpd.conf
      Creating /etc/gforge/httpd.secrets
      Creating /etc/gforge/local.inc
      Creating other includes
      invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found.
      dpkg: error processing gforge-db-postgresql (--configure):
       subprocess installed post-installation script returned error exit status 100
      Errors were encountered while processing:
       gforge-db-postgresql
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I try to remove it with:

      sudo apt-get purge gforge-common

    I get this:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages will be REMOVED:
        gforge-common* gforge-db-postgresql*
      0 upgraded, 0 newly installed, 2 to remove and 9 not upgraded.
      1 not fully installed or removed.
      After this operation, 5,853kB disk space will be freed.
      Do you want to continue [Y/n]?
      (Reading database ... 717305 files and directories currently installed.)
      Removing gforge-db-postgresql ...
      Replacing config file /etc/postgresql/8.4/main/pg_hba.conf with new version
      invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found.
      dpkg: error processing gforge-db-postgresql (--purge):
       subprocess installed pre-removal script returned error exit status 100
      Removing gforge-common ...
      Purging configuration files for gforge-common ...
      Processing triggers for man-db ...
      Errors were encountered while processing:
       gforge-db-postgresql
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    And it complains until I do a:

      sudo apt-get install -f

    At which point gforge is re-installed. I'm out of ideas; does anyone else have any idea what might be wrong and, more importantly, how I can fix it? Thank you.

    Read the article
