Search Results

Search found 22040 results on 882 pages for 'process improvement'.

Page 473/882 | < Previous Page | 469 470 471 472 473 474 475 476 477 478 479 480  | Next Page >

  • PHP Browser Game Question - Pretty General Language Suitability and Approach Question

    - by JimBadger
    I'm developing a browser game using PHP, but I'm unsure if the way I'm going about it is still to be encouraged. It's basically one of those MMOs where you level up various buildings and what have you, but then you commit an abstract fighting entity that the game gives you to an automated battle with another player (producing a textual, but hopefully amusing and varied, combat report). Basically, as soon as two players agree to fight, PHP functions on the "fight.php" page run queries against a huge MySQL database, looking up all sorts of complicated fight moves and outcomes. There are about three hundred thousand combinations of combat stance, attack, move and defensive stance, so obviously this is quite a resource-hungry process, and on the super-cheap hosted server I'm using for development it rapidly runs out of memory. The PHP script for the fight logic currently has about a thousand lines of code in it, and I'd say it's about half finished as I try to add a bit of AI into the fight script. Is there a better way to do something this massive than simply having some functions in a PHP file calling the MySQL database? I taught myself a modicum of PHP a while ago, and most of the stuff I read online (ages ago) about similar games was all PHP-based. But a) am I right to be using PHP at all, and b) am I missing some clever way of doing things that will somehow reduce server resource requirements? I'd consider non-PHP alternatives, but if PHP is suitable I'd rather stick to it, so there's no overhead of learning something new. I think I'd bite that bullet if it's the best option for a better game, though.
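    On the memory question, one pattern that tends to keep PHP usage flat is to resolve each round with a single indexed lookup rather than pulling large slices of the combination table into PHP. A minimal sketch, assuming a hypothetical fight_moves table keyed by the two stances and an attack id (all table, column and connection names here are invented, not from the question):

        <?php
        // Sketch: resolve one combat round by fetching only the matching outcome
        // row, instead of loading big result sets into PHP memory.
        // Requires the mysqlnd driver for get_result(); otherwise use bind_result().
        $db = new mysqli('localhost', 'game', 'secret', 'browsergame');
        $db->set_charset('utf8');

        $stmt = $db->prepare(
            'SELECT outcome_text, damage
               FROM fight_moves
              WHERE attacker_stance = ? AND defender_stance = ? AND attack_id = ?
              LIMIT 1'
        );

        function resolve_round(mysqli_stmt $stmt, $atk, $def, $attackId)
        {
            $stmt->bind_param('ssi', $atk, $def, $attackId);
            $stmt->execute();
            $row = $stmt->get_result()->fetch_assoc();   // one small row, not 300k
            if ($row === null) {
                $row = array('outcome_text' => 'The blow goes wide.', 'damage' => 0);
            }
            return $row;
        }

        $report = resolve_round($stmt, 'aggressive', 'turtle', 42);
        echo $report['outcome_text'], PHP_EOL;

    Looking up combinations one round at a time like this keeps per-request memory roughly constant no matter how many combinations exist, which matters more than raw PHP speed here.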

    Read the article

  • Terribly slow Apache2 on VM Virtualbox

    - by cadavre
    I just launched VirtualBox with an Ubuntu Server guest on a Windows 8 host, both 64-bit. Everything works perfectly fine; maybe it's because I'm not using any X... htop shows ~25% memory usage and everything is fine, except Apache2. Normally it's fine, but when I send a request from the browser on the host (networking mode set to Bridged), Apache2 turns into a one-minute-long loading process at 100% CPU. Any ideas how to debug it? Any ideas for solving this bottleneck?
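    A hedged debugging route, assuming a stock Apache 2 install on Ubuntu Server (package names and log paths may differ):

        top -H                                  # which apache2 worker is burning the CPU?
        sudo strace -tt -T -f -p <worker_pid>   # see the syscall it is stuck in while a request hangs
        sudo apache2ctl -M                      # list loaded modules
        tail -f /var/log/apache2/error.log      # watch for errors during the slow request

    A worker spinning on CPU for a fixed interval often points at name resolution or a misbehaving module, and strace output usually makes that distinction quickly.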

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
              FROM historic_weather
              SELECT TRANSFORM(...)
              USING '/path/to/hive/filters/ncdc_parser.py'
              AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    1. I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    2. I have to include a hive-default.xml file.
    3. I have to include a script file.
    4. The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
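    As an aside, if the scheduling route ever becomes attractive, the extra XML the article mentions is small. A minimal sketch of a coordinator definition that runs the same workflow once a day (the name, dates and frequency here are invented for illustration):

        <coordinator-app name="WeatherManDaily" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.2">
          <action>
            <workflow>
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>

    It is submitted the same way as the workflow, with oozie.coord.application.path in the properties file pointing at the HDFS directory that holds coordinator.xml.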

    Read the article

  • Apache proxying to Unicorn server times out, how to avoid?

    - by Ian
    I have a Teambox installation running on Unicorn, and the latter sometimes times out after 30 seconds. The idea of this configuration would be for Apache to wait until the Unicorn master server sends a timeout, because if I'm not wrong, Unicorn will quit the timed-out worker process but spawn a new one to handle the same request. Is there a way to configure Apache to not time out, like the nginx configuration of timeout = 0? Thanks for the help!

    EDIT: I found a way, though it doesn't really work as I expected. In the ProxyPass directive you have to specify a retry=0 option after the URL:

        ProxyPass / http://url/ retry=0

    It doesn't work if the URL is a ProxyBalancer though.
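    For reference, a hedged sketch of the relevant mod_proxy directives (the backend URL and the 300-second figure are placeholders): the per-ProxyPass timeout= option, or a global ProxyTimeout, controls how long Apache waits for the backend, and in the balancer case the same options go on each BalancerMember rather than on ProxyPass.

        ProxyTimeout 300
        ProxyPass / http://127.0.0.1:8080/ retry=0 timeout=300

        # balancer form
        <Proxy balancer://unicorn>
            BalancerMember http://127.0.0.1:8080 retry=0 timeout=300
        </Proxy>
        ProxyPass / balancer://unicorn/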

    Read the article

  • Can I use Ubuntu One to sync data files between two remote computers

    - by Sleepy John
    I've got two computers, both running Ubuntu, with files in their home folders sync'd to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download data changes that have been uploaded automatically to Ubuntu One from one computer to the equivalent data file in the other. Clarifying a bit further, I've installed Red Notebook on both computers, and so they each have their own /.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are sync'd to upload any changes to those .txt files to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files in the other computer after they've been updated and uploaded from the first computer? I did laboriously manage to download all those text files which had been uploaded from the first computer, from Ubuntu One one-by-one to the second computer, but what I want to do is automate this process and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were on-line at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files in the other computer as soon as that second computer appears on line and detects that Ubuntu One has matching but more recent sync'd files than the ones it's holding.
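    If it helps as a starting point, a hedged sketch using the Ubuntu One sync daemon's command-line helper (flag names as I recall them from the ubuntuone-client package; the Red Notebook path is assumed to live under the home directory): cloud-synced folders are normally downloaded automatically on every machine that subscribes to them.

        u1sdtool --status                             # is syncdaemon running and connected?
        u1sdtool --list-folders                       # which folders are cloud-synced on this machine?
        u1sdtool --create-folder ~/.rednotebook/data  # add the folder on the first machine
        u1sdtool --subscribe-folder <folder-id>       # on the second machine, using the id from --list-folders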

    Read the article

  • Desktop icons disappears when Nautilus is launched, until next boot

    - by Santosh
    What happens: when I log in to my Ubuntu everything is normal; I have some icons on my desktop and I can see them at this point. As soon as I click on the file manager (Nautilus) in the Launcher bar, everything disappears and never comes back. No matter how many times you click on the launcher, you can't open Nautilus. I tried opening Nautilus from the terminal and get the following:

        santosh@santosh:~$ nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2158): DEBUG: SyncDaemon already running, initializing SyncdaemonDaemon object
        Initializing nautilus-open-terminal extension
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Segmentation fault

    I suspect the "Segmentation fault" on the last line; what is that? I was amazed when I ran this command with sudo:

        santosh@santosh:~$ sudo nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2216): DEBUG: Syncdaemon not running, waiting for it to start in NameOwnerChanged
        Initializing nautilus-open-terminal extension
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing.

    As soon as I type sudo nautilus and hit enter, Nautilus starts and the desktop background changes (to the Ubuntu default). I don't know why, but at this point, as soon as I click on Desktop (from the left pane), Nautilus closes. Has anyone had the same issue? I am currently working from the command line to do my work and it's a big pain.

    Additional information: another thing I have noticed is that when I want to open the location of a PDF file I am reading in Document Viewer (by clicking the folder icon), it gives the errors "Could not open the containing folder" and "Failed to execute child process "nautilus" (Permission denied)". Any idea?
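    A hedged way to dig into that segfault (package names are Ubuntu's; configuration paths vary by Nautilus version): get a backtrace, and rule out a broken per-user configuration or a misbehaving extension.

        sudo apt-get install gdb nautilus-dbg    # debug symbols for a useful backtrace
        nautilus -q                              # stop any running instance
        gdb --args nautilus --no-desktop
        # inside gdb: run, reproduce the crash, then type: bt

        # rule out a broken per-user config (path is version-dependent):
        mv ~/.config/nautilus ~/.config/nautilus.bak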

    Read the article

  • Activity monitor is unable to execute queries against server

    - by mika
    SQL Server Activity Monitor fails with an error dialog:

        TITLE: Microsoft SQL Server Management Studio
        The Activity Monitor is unable to execute queries against server [SERVER].
        Activity Monitor for this instance will be placed into a paused state.
        Use the context menu in the overview pane to resume the Activity Monitor.

        ADDITIONAL INFORMATION:
        Unable to find SQL Server process ID [PID] on server [SERVER] (Microsoft.SqlServer.Management.ResourceMonitoring)

    I have this problem on SQL Server 2008 R2 x64 Developer Edition, but I think it is found in all 64-bit systems using SQL Server 2008, under some as-yet-unidentified conditions. There is a bug report on this in Microsoft Connect. It seems that the problem is not solved yet.

    Read the article

  • ADF Mobile Client is now Generally Available!

    - by joe.huang
    ADF Mobile Client is now generally available!  The press release went out this morning, and the ADF Mobile Client extensions can now be downloaded in the JDeveloper Update Center.  There is also a new Oracle Mobile Computing Strategy White Paper and Data Sheet available, for a high level overview of ADF Mobile.

    To get started with ADF Mobile Client development, please leverage the following resources:

    - Oracle Technology Network ADF Mobile Landing Page: Review this page for all available resources for ADF Mobile development.
    - Getting Start with ADF Mobile Client Demo: Short demo of the end-to-end development process.
    - Tutorial for Mobile Application Development using ADF Mobile Client
    - ADF Mobile Client Developer Guide
    - ADF Mobile Client Samples: available in the JDeveloper Extension itself.  Located in the <JDeveloper Install Location>/jdev/extensions/oracle.adfnmc.core/Samples directory.  Blogs will follow, describing each of the sample applications in more detail.
    - Oracle Database Mobile Server: If database synchronization is needed, please follow this link to download/install Mobile Server.
    - Leverage the JDeveloper Forum for any ADF Mobile related questions.

    You will need the latest (11g Patch Set 3, or 11.1.1.4.0) version of JDeveloper to use this extension.  To download the ADF Mobile Client extension in JDeveloper, you would go to the Help Menu, select "Check For Update", and look for the ADF Mobile Client extension in the Official Oracle Extensions and Updates center.  You can also directly download the extension from Oracle Technology Network.

    Check it out!  For any issues with accessing any of the links above, please contact me directly.

    Thanks,
    Joe Huang ([email protected])

    Read the article

  • OS X SMB Connection to Windows Server 2008 R2

    - by Tawm
    I support many Macs that connect to an SMB share on a Windows Server 2008 R2 box. Occasionally I find that a Mac will try to connect and fail. The connection process skips asking for credentials, and there are none stored in the keychain's password section. The server logs show that the user tried to authenticate with invalid credentials. There are also no lingering connections from the server's point of view. The workaround that I've found is to use an invalid username, or a username that isn't the user's, in the SMB connection string: smb://domain;user@server/share instead of 'smb://server/share'. This forces it to use the username I specified, which the client doesn't have anything stored for, so it then pops up the login window, where the user changes the username to the correct one and uses her password to connect happily. Specific computer in question: 15" MBP running Snow Leopard (10.6.7 or 10.6.8)
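    One hedged thing to check from the Mac side (the server name below is a placeholder): whether a stale SMB entry is hiding in the login or System keychain, since Finder will silently reuse whatever it finds there.

        security find-internet-password -s server.example.com
        security find-internet-password -s server.example.com /Library/Keychains/System.keychain
        # if a stale entry shows up, delete it in Keychain Access, or (on releases
        # that ship the subcommand) with:
        security delete-internet-password -s server.example.com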

    Read the article

  • cannot delete IPv6 default gateway

    - by NulledPointer
    The commands below should be pretty self-explanatory. Please note that the route for which I get the failure is obtained by RA and has a very short expiry (the 'e' flag in UDAe).

        @vm:~$ ip -6 route
        2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592300sec
        fe80::/64 dev eth1 proto kernel metric 256
        default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1
        default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1776sec
        @vm:~$
        @vm:~$
        @vm:~$
        @vm:~$ sudo route -6 delete default gw fe80::20c:29ff:fe87:f9e7
        @vm:~$ ip -6 route
        2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024
        fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592279sec
        fe80::/64 dev eth1 proto kernel metric 256
        default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1755sec
        @vm:~$
        @vm:~$
        @vm:~$ sudo route -6 delete ::/0 gw fe80::20c:29ff:fe87:f9e7 dev eth1
        SIOCDELRT: No such process
        @vm:~$
        @vm:~$
        @vm:~$ route -n6
        Kernel IPv6 routing table
        Destination                                  Next Hop                  Flag  Met  Ref Use If
        2001:4860:4001:800::1002/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
        2001:4860:4001:800::1003/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
        2001:4860:4001:800::1005/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
        2001:4860:4001:803::100e/128                 fe80::20c:29ff:fe87:f9e7  UG    1024 0   0   eth1
        fd00:ffff:ffff:fff1::/64                     ::                        UAe   256  0   0   eth1
        fe80::/64                                    ::                        U     256  0   0   eth1
        ::/0                                         fe80::20c:29ff:fe87:f9e7  UGDAe 1024 0   0   eth1
        ::/0                                         ::                        !n    -1   1   349 lo
        ::1/128                                      ::                        Un    0    1   3   lo
        fd00:ffff:ffff:fff1:a00:27ff:fe7f:7245/128   ::                        Un    0    1   0   lo
        fd00:ffff:ffff:fff1:fce8:ce07:b9ea:389f/128  ::                        Un    0    1   0   lo
        fe80::a00:27ff:fe7f:7245/128                 ::                        Un    0    1   0   lo
        ff00::/8                                     ::                        U     256  0   0   eth1
        ::/0                                         ::                        !n    -1   1   349 lo
        @vm:~$

    UPDATE: Another question is, what's the use of a link-local address as the default route?
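    For reference, a hedged alternative: when the next hop is a link-local (fe80::) address, the kernel needs the interface to disambiguate it, and iproute2 tends to behave more predictably than the old route wrapper for IPv6. The usual form is:

        # delete the RA-learned default route; the link-local next hop needs "dev"
        sudo ip -6 route del default via fe80::20c:29ff:fe87:f9e7 dev eth1
        # to stop it coming straight back from Router Advertisements on eth1:
        sudo sysctl -w net.ipv6.conf.eth1.accept_ra=0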

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:

    Step 1: the user signs up and the user record is inserted into the database.
    Step 2: the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, the user record has not yet been replicated to the slave, and it returns an error that the user does not exist.

    This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
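    A common mitigation is read-your-own-writes routing at the application layer: any query that must see data the same user just wrote goes to the master, and only lag-tolerant reads go to the slave. A minimal PHP/PDO sketch (hostnames, credentials and schema are placeholders, not from the question):

        <?php
        // Route reads that must see a just-committed row to the master;
        // everything else can hit a replica.
        function db($needFresh)
        {
            $host = $needFresh ? 'master.db.internal' : 'slave.db.internal';
            return new PDO("mysql:host=$host;dbname=app", 'app', 'secret',
                           array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
        }

        $email = '[email protected]';
        $hash  = password_hash('s3cret', PASSWORD_DEFAULT);

        // signup: write to the master
        db(true)->prepare('INSERT INTO users (email, pass_hash) VALUES (?, ?)')
                ->execute(array($email, $hash));

        // immediate login check: also read from the master, since the slave may lag
        $stmt = db(true)->prepare('SELECT id, pass_hash FROM users WHERE email = ?');
        $stmt->execute(array($email));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);

    Other options in the same spirit include pinning a user's session to the master for a short window after any write, or using MySQL semi-synchronous replication to shrink the lag window rather than eliminate it.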

    Read the article

  • No win 7 users available for login after dell datasafe factory reset

    - by user897052
    I created install discs using the Dell DataSafe 2.0 backup utility in order to re-install Windows on a friend's laptop (Dell Inspiron N5110). I ran the discs to do a factory reset. After the whole process, it booted, started loading Windows 7, displayed the messages "setup is preparing your computer for first use" and "setup is checking video performance," and showed the login screen. However, there don't seem to be any active users on the machine; I opened a command prompt window from the login screen to check the users. Using that command prompt, I activated/enabled the Administrator account and even created another admin account, but upon logging in I received several errors, couldn't load any MMCs, etc. Any help would be appreciated.

    Read the article

  • Coldfusion deployment under Apache Tomcat with Virtual Hosts

    - by smalltiger85
    Hello, I'm looking for the proper way to share the ColdFusion engine across all my virtual hosts. This is how I have the virtual hosts configured in server.xml in the Tomcat conf:

        <Host name="localhost" appBase="webapps"
              unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="cfusion/" reloadable="true" privileged="true"
                   antiResourceLocking="false" anitJARLocking="false" allowLinking="true"/>
        </Host>

        <Host name="mysite1" appBase="webapps"
              xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="/mnt/webroot/mysite1" />
        </Host>

    This is not valid for me because I'm instantiating one ColdFusion process for every virtual host, and it requires me to copy the WEB-INF folder into every site. My question is: is there any way to share the same ColdFusion instance for every virtual host, keeping the sites' webroots outside of the cfusion folder? I'm using Apache Tomcat 6 and ColdFusion Enterprise Edition 8. Many thanks.
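    One direction sometimes suggested for this, offered only as a sketch (not verified against ColdFusion 8; the hostname is the asker's): keep a single <Host> backed by the cfusion context and add the other site names as aliases, so every hostname hits the same webapp and therefore the same ColdFusion engine. The external webroots would then be exposed to that one context another way, for example via symlinks, which the existing allowLinking="true" already permits.

        <Host name="localhost" appBase="webapps"
              unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
          <Alias>mysite1</Alias>
          <Context path="" docBase="cfusion/" reloadable="true" privileged="true"
                   antiResourceLocking="false" allowLinking="true"/>
        </Host>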

    Read the article

  • Getting Partial / No Redundancy on VM's created on latest datastore

    - by Germano
    Hi, first some background: I'm in the process of upgrading my ESX servers from 3.5 to vSphere 4, and so far I have set up the new vCenter Server. Before I start the upgrade of the ESX hosts, I needed more storage, so I created 3 datastores from available space on my EqualLogic PS6000, which has been connected for a while, so as far as connectivity goes, nothing has changed. But now here's my problem: I get a "Partial / No Redundancy" warning on any VM that I create in any of these new datastores. I can create VMs on any of the older datastores on LUNs from exactly the same EqualLogic and it works fine, but not on the new ones. Keep in mind that these new datastores are the only ones that were created under the new vCenter, so I believe it must have something to do with it. Is anyone aware of any issues with creating datastores using the new vCenter but on a 3.5 ESX host? iSCSI with QLogic QLE406x. Thanks in advance for any help. Germano

    Read the article

  • BizTalk - Removing BAM Activities and Views using bm.exe

    - by Stuart Brierley
    Originally posted on: http://geekswithblogs.net/StuartBrierley/archive/2013/10/16/biztalk---removing-bam-activities-and-views-using-bm.exe.aspx

    On the project I am currently working on, we are making quite extensive use of BAM within our growing number of BizTalk applications, all of which are being deployed and undeployed using the excellent Deployment Framework for BizTalk 5.0. Recently I had an issue where problems on the build server had left the target development servers in a state where the BAM activities and views for a particular application were not being removed by the undeploy process, and unfortunately the definition in the solution had changed, meaning that I could not easily recreate the file from source control. To get around this I used the bm.exe application from the command line to manually remove the problem BAM artifacts. bm.exe can be found at the following path:

        C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking

    Step 1: Get the BAM definition file

    Run the following command to get the BAM definition file, containing the details of all the activities, views and alerts:

        bm.exe get-defxml -FileName:{Path and File Name Here}.xml

    Step 2: Remove the BAM artifacts

    At this stage I chose to manually remove each of my problem BAM activities and views using separate command-line calls. By looking in the definition file I could see the names of the activities and views that I wanted to remove, and then use the following commands to remove first the views and then the activities:

        bm.exe remove-view -name:{viewname}
        bm.exe remove-activity -name:{activityname}

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide the step-by-step instructions in this post, as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Then the instructions in Part 2 won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    - You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client.
    - You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features.

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

        Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.
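    For reference, the config.xml edit the linked MSDN article describes is a single extra Setting element; a sketch of what the modified file looks like (only the added line matters, and the rest of the file stays exactly as shipped on the copied media):

        <Configuration>
          <!-- existing settings from the SharePoint media stay as-is -->
          <Setting Id="AllowWindowsClientInstall" Value="True"/>
        </Configuration>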
    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    - On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    - On the Server Type page, I chose Complete.
    - At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint but have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have it installed. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    - Install SQL Server 2008 R2 to get a database engine instance installed.
    - Run the SharePoint configuration wizard to set up the SharePoint databases.
    - In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization.
    - Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment.

    I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • Installation stops with cmd.exe window on Laptop

    - by Saariko
    I am installing Windows 7 Enterprise on an LG R580. I am working with a valid ISO (it installs perfectly on other systems). During the installation, before the installation window appears, the process hangs and I get a black cmd.exe screen with the following:

        Select Administrator: X:\windows\system32\cmd.exe
        Microsoft Windows [Version 6.1.7601]

    with a prompt for X:\windows\system32. My keyboard at that point only prints capital letters with '^' before each one. The only thing I am able to do is reboot. In the BIOS, I tried to disable USB Legacy (thinking the problem was with my DVD), but it did not help.

    Read the article

  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory, so I can store them in an EBS volume (elastic block storage, just a fancy name for a virtualized hard disk) together with my web apps and other persistent data. I have tried to walk through a tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command:

        # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"

    Then the command fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (which behaves like an ordinary x86-compatible box) and Amazon EC2 (based on Xen), so the tutorial should be compatible. It is also worth mentioning that turning off SELinux globally, or just for the MySQL process, is not an option, because such a solution would decrease the security of the system if a hacker gained access to it via the MySQL server. I have never seen this problem before I changed to the Red Hat/Fedora architecture, so it could be a distribution-specific issue. Any help is highly appreciated.
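    A hedged set of checks (package names are Fedora's; whether they match Fedora 15 exactly is an assumption): confirm the type actually exists in the loaded policy and that the semanage tooling is fully installed, then re-apply the label.

        sudo yum install -y policycoreutils-python setools-console   # semanage and seinfo live here
        seinfo -t | grep mysqld                                      # is mysqld_db_t known to the active policy?
        sudo semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"
        sudo restorecon -Rv /srv/mysql                               # apply the new label to the files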

    Read the article

  • Problem with shared ssh keys

    - by warren
    Following the process I've used in other environments, I've tried setting up shared keys between my Mac and my CentOS 4 webserver. I've seen the same problem with my older Ubuntu 7.10 workstation trying to connect via keys to the same webserver. I have tried both dsa and rsa key types (ssh-keygen -t <type>). The sshd_config file on my webserver seems to be allowing key-based logins:

        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys

    And my .ssh/authorized_keys has my dsa and rsa keys added. Where should I be looking for what to change next to make key-based logins "Just Work™"? Is it related to the #UseDNS yes line, with sshd trying to do a reverse lookup on my IP and failing because it's NAT'd?
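    A hedged checklist for the server side (sshd is very picky about permissions, and on CentOS the failure reason usually lands in the secure log):

        chmod go-w ~                       # home dir must not be group/world writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # from the Mac, watch the key negotiation:
        ssh -vvv user@webserver
        # on the server, watch why it falls back to passwords:
        sudo tail -f /var/log/secure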

    Read the article

  • Windows Azure Learning Plan - Compute

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the "compute" function of Windows Azure, which includes Configuration Files, the Web Role, the Worker Role, and the VM Role. There is a general programming guide for Windows Azure that you can find here to help with the overall process.

    Configuration Files

    Configuration Files define the environment for a Windows Azure application, similar to an ASP.NET application. This section explains how to work with these.

    - General Introduction and Overview: http://blogs.itmentors.com/bill/2009/11/04/configuration-files-and-windows-azure/
    - Service Definition File Schema: http://msdn.microsoft.com/en-us/library/ee758711.aspx
    - Service Configuration File Schema: http://msdn.microsoft.com/en-us/library/ee758710.aspx

    Windows Azure Web Role

    The Web Role runs code (such as ASP pages) that require a User Interface.

    - Web Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470854&CountryCode=US
    - Web Role Deployment Checklist: http://blogs.infragistics.com/blogs/anton_staykov/archive/2010/06/30/windows-azure-web-role-deployment-checklist.aspx
    - Using a Web Role as a Worker Role for Small Applications: http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html

    Windows Azure Worker Role

    The Worker Role is used for code that does not require a direct User Interface.

    - Worker Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470871&CountryCode=US
    - Worker Role versus Web Roles: http://msdn.microsoft.com/en-us/library/gg433012.aspx
    - Deploying other applications (like Java) in a Windows Azure Worker Role: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx

    Windows Azure VM Role

    The Windows Azure VM Role is an Operating System-level mechanism for code deployment.

    - VM Role Overview and Details: http://msdn.microsoft.com/en-us/library/gg465398.aspx
    - The proper use of the VM Role: http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx

    Read the article

  • How to install Web Deployment Agent

    - by Jerry
    I am trying to set up TFS automated deploys, but I keep getting the following error message when running the deploy. It appears that I have a service called "Web Management Service", but the error message says that I need the "Web Deployment Agent Service". I tried installing Web Deploy 2.0, but the server said that I already had this installed. What can I do to fix this problem?

        Error Code: ERROR_DESTINATION_NOT_REACHABLE
        Could not connect to the destination computer ("myServer"). On the destination computer, make sure that Web Deploy is installed and that the required process ("Web Deployment Agent Service") is started.

    --Update-- It looks like the Web Deployment Agent is not installed by default. I had to re-install MSDeploy, select Change or Custom, then add the Web Deploy Agent service. Now the deploy works correctly.

    Read the article

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through Synaptic Package Manager and then tried to install 9.1, but I still had version 8.4. I deleted all the files associated with PostgreSQL. Now I am unable to install any version of PostgreSQL. When I try, I get this error:

        Setting up postgresql-9.1 (9.1.3-1~lucid) ...
        .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions
        dpkg: error processing postgresql-9.1 (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of postgresql:
         postgresql depends on postgresql-9.1; however:
          Package postgresql-9.1 is not configured yet.
        dpkg: error processing postgresql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         postgresql-9.1
         postgresql
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Please tell me the way to remove Postgres completely so I can install a fresh version.
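    A hedged cleanup sequence for Debian/Ubuntu-style packaging (this purges local databases and configuration, so back up anything you need first; the package list is a guess at what may be half-installed):

        dpkg -l | grep postgres                      # see which postgres packages are still present
        sudo apt-get --purge remove postgresql postgresql-8.4 postgresql-9.1 postgresql-common postgresql-client-common
        sudo rm -rf /var/lib/postgresql /etc/postgresql /etc/postgresql-common
        sudo apt-get update
        sudo apt-get install postgresql-9.1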

    Read the article

  • I can't run uwsgi as normal user

    - by atomAltera
    I want to run the uwsgi server as the www user, but if I write:

        uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www

    a socket creation error occurs. Log:

        1 *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] ***
        2 compiled with version: 4.4.5 on 17 November 2012 23:31:14
        3 os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012
        4 nodename: autoblog
        5 machine: x86_64
        6 clock source: unix
        7 pcre jit disabled
        8 detected number of CPU cores: 2
        9 current working directory: /
        10 writing pidfile to /tmp/uwsgi_mysite.pid
        11 detected binary path: /usr/local/bin/uwsgi
        12 setgid() to 1002
        13 set additional group 1004 (files)
        14 setuid() to 1002
        15 *** WARNING: you are running uWSGI without its master process manager ***
        16 your memory page size is 4096 bytes
        17 detected max file descriptor number: 1024
        18 lock engine: pthread robust mutexes
        19 unlink(): Operation not permitted [core/socket.c line 109]
        20 bind(): Address already in use [core/socket.c line 141]
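    A hedged guess at the cause, given lines 19-20 of the log: privileges are dropped (lines 12-14) before the socket is bound, and a socket file left behind by an earlier root-owned run still sits at $SOCKET, so the now-unprivileged process can neither unlink nor rebind it. Something along these lines usually clears it (paths are placeholders for the script's variables):

        sudo rm -f /tmp/uwsgi_mysite.sock                           # stale socket from a previous root run
        sudo mkdir -p /var/run/uwsgi && sudo chown www:www /var/run/uwsgi
        uwsgi --socket /var/run/uwsgi/mysite.sock --chmod-socket 666 --uid www --gid www ...
        # keep the rest of the original options unchanged; adding --master would
        # also silence the warning on line 15 of the log.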

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    Trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account. I have created an MX record with the mail server set to mail.domain.com and a priority of 20.

    1) But the question is: how do I now link this MX record for the domain to my mail server / mail server IP address?
    2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain?
    3) In Settings SMTP Delivery of email: what should my configuration here look like?
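    On question 1, the usual linkage is a pair of DNS records rather than anything inside hMailServer itself: the MX names a host, and an A record maps that host to the mail server's IP. A sketch in zone-file form (example.com and 203.0.113.25 are placeholders for the asker's domain and server address):

        example.com.        IN  MX  20  mail.example.com.
        mail.example.com.   IN  A       203.0.113.25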

    Read the article

  • forward mails from sendmail + mimedefang on two unix boxes

    - by SWKK
    I am not a Unix expert, more like a novice. I have one user account replicated on two Unix boxes, receiving the same mails on both boxes for failover. Sendmail has the MIMEDefang utility running to process these mails. After all the processing is complete (more like weeding out the spam, viruses, etc.), I need to forward the mails coming to this account on both boxes to a central mailbox on MS Exchange for this account. The problem is that I am using a .forward file, hence both Unix boxes forward the mail after processing. I just want to be able to ignore one of the two mails. Has anyone tried something similar? Any directions? Please?
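    One hedged approach, assuming procmail is available somewhere in the delivery path (on the central side, or on one of the boxes before it forwards): the classic formail duplicate filter, which drops the second copy by Message-ID and lets everything else continue on to the Exchange mailbox. The address and cache size below are placeholders.

        # ~/.procmailrc sketch
        :0 Wh: .msgid.lock
        | formail -D 16384 .msgid.cache

        # anything not identified as a duplicate is forwarded on
        :0
        ! central-mailbox@exchange.example.com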

    Read the article
