Search Results

Search found 18132 results on 726 pages for 'connection timeout'.


  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-and-all approach to setting up a web role on Azure. In this post I'll describe how to add an SQL Azure database to the project, with as little code and as few screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here).
    Steps:
    1. Create a Comments entity, use scaffolding to set up the controller and view, and add a connection string to web.config.
    2. Create the SQL Azure database in the Management Portal and link the new database.
    3. Test it online!
    1. Right-click the Models folder, choose Add, choose "Class…". Name the class Comment.
    1.1 Replace the code in the class with the following:
        using System.Data.Entity;
        namespace MvcWebRole1.Models
        {
            public class Comment
            {
                public int CommentId { get; set; }
                public string Name { get; set; }
                public string Content { get; set; }
            }
            public class CommentsDb : DbContext
            {
                public DbSet<Comment> CommentEntries { get; set; }
            }
        }
    Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.
    1.2 Right-click the Controllers folder, choose Add, choose "Controller…". Name the class CommentController and fill out the values as in the example below.
    1.3 Click Add. Visual Studio now creates default views for the CRUD operations and a controller adhering to them, and opens them.
    1.4 Open Web.config and add the following connection string in the <connectionStrings> node:
        <add name="CommentsDb" connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
    1.5 Save All and press F5 to start the application.
    1.6 Go to http://127.0.0.1:81/Comments, which will redirect you through the comments controller to the Index view. Click Create New. In the Create view, add a name and content and press Create.
        //
        // POST: /Comments/Create
        [HttpPost]
        public ActionResult Create(Comment comment)
        {
            if (ModelState.IsValid)
            {
                db.CommentEntries.Add(comment);
                db.SaveChanges();
                return RedirectToAction("Index");
            }
            return View(comment);
        }
    The default view is Index, so that is the view you end up on:
        //
        // GET: /Comments/
        public ActionResult Index()
        {
            return View(db.CommentEntries.ToList());
        }
    Resulting in the following screen dump (success!).
    2. Now go to the Management Portal and create a new database.
    2.1 With the new database created, click the DB icon in the leftmost menu, then click the newly created database. Click DASHBOARD in the top menu, and finally click Connection strings in the right menu to get the connection string we need to add to our web.debug.config file.
    2.2 Now take a copy of the connection string added earlier to web.config and paste it into web.debug.config in the connectionStrings node. Replace everything within the quotes in the copied connection string with what you got from SQL Azure. You will have something like this (a sketch is included at the end of this post):
    2.3 Rebuild the application, right-click the cloud project and choose "Package…" (if you haven't set up a publishing profile, which we will do in the next blog post).
    Remember to choose the right config file: use debug for staging and release for production so your databases won't collide. You should see something like this:
    2.4 Go to the Management Portal, click the Web Services menu, choose your service and click Update in the bottom menu.
    2.5 Link the newly created database to your application. Click LINKED RESOURCES in the top menu and then click "Link" in the bottom menu. You should get something like this.
    3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser, then go to ~/Comments to try it out just the way we did locally. Success, and end of this story!
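    For step 2.2 above, a rough idea of what the SQL Azure entry in web.debug.config ends up looking like is sketched below; the server name, user and password are placeholders, and the real values should be copied from the portal's connection strings panel:
        <add name="CommentsDb"
             connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=CommentsDb;User ID=youruser@yourserver;Password=yourpassword;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
             providerName="System.Data.SqlClient" />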

    Read the article

  • WebCenter Sites 11gR1 Bundled Patch 1 is now available

    - by R.Hunter
    There is a new patch available for WebCenter Sites - 11gR1 Bundled Patch 1. The download links can be obtained from the WebCenter Sites Download page. Some of the highlights of WebCenter Sites 11gR1 Bundled Patch 1 are listed below:
    - UI customization support: a new developer's guide is available for use in customizing the Contributor UI. Customizable UI components include the Dashboard, search views, toolbars, menus, and asset forms. In addition, global or site-specific configuration properties can be specified for controlling what is displayed in the UI.
    - Localization support: the Contributor UI is localized for the following languages: French, German, Italian, Spanish, Brazilian Portuguese, Japanese, Korean, Simplified & Traditional Chinese.
    - Developer tools (CSDT) now support connection to a remote Sites server.
    - Security updates, including a request authentication filter to prevent CSRF attacks, REST API updates, and more.
    - Session replication support in the management user interfaces.
    - Bug fixes.
    Please refer to the release notes and documentation for more information.

    Read the article

  • How do I control a remote Windows 7 machine? VNC/RDP/?

    - by artfulrobot
    I have a Windows 7 laptop that's going out and about, and I'd like to be able to admin it from my Ubuntu (Precise) desktop. While the laptop is in my office I can use GNOME's "Remote Desktop Viewer" (Vinagre) to connect via RDP. However, when it's in the office I don't really need RDP of course! And when it's out of the office I cannot control the routers/firewalls along the way, so I won't know the IP and cannot set up port forwards. I'm looking for an Ubuntu-compatible way to achieve this. N.B. I can set up port forwards at the office end, so a service on the laptop could try a "backwards" connection to my client.
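    One possible shape for that "backwards" connection, sketched on the assumption that the Ubuntu desktop runs openssh-server (with a matching port forward on the office router) and that the laptop has an SSH client such as PuTTY's plink:
        # run on the Windows 7 laptop: publish the laptop's own RDP service (port 3389)
        # on the office desktop as port 13389, through an outbound SSH connection
        plink -N -R 13389:localhost:3389 user@office-public-ip

        # then, on the Ubuntu desktop, point the RDP client at the tunnel endpoint
        rdesktop localhost:13389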

    Read the article

  • Forward TCP Connections with Iptables

    - by opc0de
    I receive connections to my server from several IP addresses. I want to route these connections just as rinetd does, but pick the destination host based on the IP the connection is coming from. Just like this:
    IP 10.10.12.1 => CONNECTS TO MY SERVER => MY SERVER REDIRECTS IT TO 82.12.12.1
    IP 10.10.12.2 => CONNECTS TO MY SERVER => MY SERVER REDIRECTS IT TO 81.121.12.10
    etc.
    Is it possible, or do I need to write my own daemon to achieve this functionality?
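    This is what destination NAT in the nat table can do, keyed on the source address. A minimal iptables sketch (port 8080 is an assumption; substitute whatever port the clients actually connect to, and persist the rules with your distribution's usual mechanism):
        # let the box forward packets at all
        sysctl -w net.ipv4.ip_forward=1

        # pick a destination per source address
        iptables -t nat -A PREROUTING -p tcp -s 10.10.12.1 --dport 8080 -j DNAT --to-destination 82.12.12.1:8080
        iptables -t nat -A PREROUTING -p tcp -s 10.10.12.2 --dport 8080 -j DNAT --to-destination 81.121.12.10:8080

        # rewrite the source so replies flow back through this server
        iptables -t nat -A POSTROUTING -j MASQUERADE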

    Read the article

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the website to a VPS which has 4 GB RAM. But for some reason the website has become very slow. This is the vmstat output:
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
        r b swpd free buff cache si so bi bo in cs us sy id wa st
        1 0 0 3050500 0 0 0 0 0 1 0 0 0 0 100 0 0
    Here's the Apache Benchmark output for a STATIC html page I ran on the server itself:
        Benchmarking www.ask-oracle.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed
    Update: Server config: CentOS 5.6, 4 CPU cores, 4 GB RAM, LAMP stack with APC, WordPress, only one website. It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize apache, mysql etc.
    Update 2: CPU usage is low, see the uptime output: 11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09
    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this?
    Update 4: Following is the output of netperf:
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40
        [root@ip-118-139-177-244 j3ngn5ri6r01t3]#
    Here are the Apache MPM settings from httpd.conf, do they look okay?
        <IfModule worker.c>
            StartServers 5
            MaxClients 100
            MinSpareThreads 50
            MaxSpareThreads 250
            ThreadsPerChild 125
            MaxRequestsPerChild 10000
            ServerLimit 100
        </IfModule>
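    To test the Update 3 suspicion about connections piling up in a waiting state, counting sockets by TCP state is a quick, generic first check (a sketch, not specific to this setup):
        # count connections per TCP state; large SYN_RECV / ESTABLISHED piles point at a connection limit
        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn

        # how many Apache worker processes are currently running (the process is httpd on CentOS)
        ps -C httpd --no-headers | wc -l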

    Read the article

  • Windows 7 mapped drive kicking off OS X users

    - by Collin White
    I've mapped a network drive on my Windows 7 PC at my office. The Windows machine has a few TB of storage that is being accessed by my development team (all running Mac OS X 10.7). The share seems to work fine for a little while, but it will time out and kick the Mac users off, and sometimes disallows a connection on the next attempt. Restarting the Windows machine fixes the problem. I've tried this tutorial as well as setting the maximum session length in the Local Security Policy section to 99999 (I discovered 0 did not mean unlimited, only a 'reasonable amount of time'); anyway, the setting is now ~208 days, which is sufficient (see attached). I'm having trouble debugging this in general, so if anyone has some pointers I'm all ears. This is an intermittent issue, which in my opinion are the hardest kind to debug. If anyone knows how I might monitor connections from the PC, that would also be pretty cool. Previously the files were hosted on a Mac mini and everything was working just fine (the mini just didn't have the storage capacity we needed), so I believe it is some Windows setting that is kicking users off. Anyway, thanks for reading.
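    For watching who is attached to the share from the Windows side, the built-in SMB session commands are a reasonable starting point (run from an elevated command prompt; this is a general monitoring sketch, not a fix for the timeout itself):
        rem list the machines currently holding SMB sessions, including idle times
        net session

        rem list open files on the shares and who has them open
        net file

        rem show the server service settings, including the idle session time
        net config server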

    Read the article

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with a LEMP stack; I followed the guide from the Linode Library (since I'm using a Linode) to set it up, and everything worked fine until now. I don't know what's wrong, but my CPU usage has just kept going up since a week ago. Today things got really bad - I saw 74% CPU usage, so I went to check and found mysqld taking too much CPU (somewhere around 30% ~ 80%). So I did some Google searching, tried disabling InnoDB, restarting mysql, resetting ntp / the system clock (isn't this bug supposed to have happened more than a year ago?!) and rebooting my VPS; nothing helped. Even with the mysql process list empty, I still get very high mysqld CPU usage. I don't know what I missed and have totally no idea; any advice would be appreciated. Thanks in advance.
    Update: I got these from running "strace mysqld":
        write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44
        write(2, "InnoDB: Check that you do not al"..., 115) = 115
        select(0, NULL, NULL, NULL, {1, 0}^[[A^[[A) = 0 (Timeout)
        fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable)
    hum... I did try to disable InnoDB and it didn't fix this problem. Any idea?
    Update 2:
        # ps -e | grep mysqld
        13099 ? 00:00:20 mysqld
    then using "strace -p 13099", the following lines appear repeatedly:
        fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0
        fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}])
    er... now I totally don't get it x_x help
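    Since the first strace shows InnoDB failing to lock ./ibdata1, it is worth checking whether a second (or stale) mysqld already holds that file. A quick sketch, assuming the default datadir /var/lib/mysql:
        # which processes have the InnoDB system tablespace open?
        sudo lsof /var/lib/mysql/ibdata1

        # or, equivalently
        sudo fuser -v /var/lib/mysql/ibdata1

        # list every running mysqld and how it was started
        ps -ef | grep [m]ysqld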

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:
        server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" )
        [...]
        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
               "max-procs" => 1,
               "kill-signal" => 9,
               "idle-timeout" => 10,
               "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ),
               "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ),
               "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
               "broken-scriptfilename" => "enable" )))
        [...]
        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
            server.document-root = "/home/cam/web/"
            accesslog.filename = "/home/cam/log/access.log"
            server.errorlog = "/home/cam/log/error.log"
            server.follow-symlink = "enable"
            # files to check for if .../ is requested
            server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
            url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
        }
    I have the following dir tree:
        /home/tv/web/
        `-- pyapps
            `-- essai
                `-- __init__.py
                `-- blondes.fcgi
                `-- blondes.pid
                `-- django-fcgi.py
                `-- manage.py
                `-- manage.pyo
                `-- plop
                `-- settings.py
                `-- urls.py
    No error when restarting lighttpd. Then I run:
        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid
    No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
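    For comparison, the Django FastCGI handler is usually declared as its own fastcgi.server entry rather than nested inside the ".php" block, roughly as in the sketch below (based on the paths above; "check-local" has to be disabled because the .fcgi file itself has no content for lighttpd to serve):
        fastcgi.server += (
            "/pyapps/essai/blondes.fcgi" => (
                "main" => (
                    "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
                    "check-local" => "disable",
                )
            )
        )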

    Read the article

  • BlueStacks Android App Player Now Available for Macs

    - by Jason Fitzpatrick
    Last year we showed you how to set up BlueStacks on your Windows machine in order to enjoy Android apps on your PC desktop; now BlueStacks is available for Mac OS X with that same cross-platform Android goodness. The Mac version functions much the same as the PC version; if you're interested in it, be sure to check out our detailed guide to setting up the PC version. BlueStacks for Mac [via TUAW]

    Read the article

  • Broken Gnome panel after Ubuntu Update

    - by Asaf
    As with every single Ubuntu update, my graphical system is completely broken after updating to version 11.10: http://i.imgur.com/YIHfA.png As shown in the image, the date is in the middle of the panel, many icons are missing (network connection icon, Skype), there's no right-click menu when I try to right-click on the panel, and many programs don't have their menus (File, Edit, View... those things). How do I fix this? I have gnome-panel, I don't want Unity, and I'd rather install Windows 95 before I switch to it.

    Read the article

  • Need instructions on how to create wpa_supplicant.conf and add fast_reauth=0 to it // WPA2 Enterprise & frequent WLAN disconnects

    - by nutty about natty
    Like many other Natty users on a university / academic network, I'm experiencing annoying frequent disconnects / hangs / delays. See, for instance: https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/429370 I would like to learn how to add fast_reauth=0 to the wpa_supplicant.conf file. This file, it seems, does not exist by default, and needs to be manually created first: http://w1.fi/gitweb/gitweb.cgi?p=hostap.git;a=blob_plain;f=wpa_supplicant/README [quote] You will need to make a configuration file, e.g., /etc/wpa_supplicant.conf, with network configuration for the networks you are going to use. [unquote] Further, I installed wpa_gui, which probably needs to be launched with parameters, else it's pretty blank... What I'm hoping for is this: that creating a wpa_supplicant.conf file with fast_reauth=0 in it and saving it to the relevant path will work and make my uni wireless (more or even completely) stable. I read mixed reviews about wicd (as an alternative to Network Manager). Also note that on my basic WLAN at home (with bog-standard WPA encryption) the connection is stable. Thanks!
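    For reference, a minimal /etc/wpa_supplicant.conf along those lines might look like the sketch below. The SSID, EAP method and credentials are placeholders for whatever the university network actually uses, and note that NetworkManager keeps its own per-connection settings and does not necessarily read this file:
        # /etc/wpa_supplicant.conf -- minimal sketch
        ctrl_interface=/var/run/wpa_supplicant
        fast_reauth=0

        network={
            ssid="eduroam"
            key_mgmt=WPA-EAP
            eap=PEAP
            phase2="auth=MSCHAPV2"
            identity="username@university.example"
            password="secret"
        }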

    Read the article

  • Google Webmaster Tools verification failed

    - by KMC
    I have a site created with Ruby on Rails. I had verified it in Google Webmaster Tools some months ago, which was successful. Then one day Webmaster Tools started giving me re-verification failures. I tried again to verify my site using meta tags and HTML files, but I kept getting "Verification failed. The connection to your server timed out." Since then, Google has stopped crawling my site's content - though, somehow, Google still crawls the PDF content on my site.

    Read the article

  • Reverse X11 forwarding

    - by Oli
    I was playing with my phone (which runs a Linux/X stack) last night, and I managed to ssh into my desktop, run an application, and have it show up on my phone. It was awesome. Today I'd like to do sort of the opposite: I want to view an application running on my phone on my PC. I could install an SSH server on my phone, but I frankly don't fancy that, purely for security reasons. I want this to be initiated from my phone. Is there a way to connect from my phone, tunnel the PC's X connection back to the phone, and then run an application on the phone that shows on the PC?
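    In principle an SSH local forward opened from the phone can carry the X traffic. A sketch, assuming the PC runs an SSH server and its X server accepts TCP connections on display :0 (many distributions start X with -nolisten tcp, so that, plus xhost/xauth authorization, would need sorting out first):
        # on the phone: forward a local port to the PC's X display :0 (TCP port 6000)
        ssh -N -L 6010:localhost:6000 user@desktop-pc &

        # still on the phone: local port 6010 corresponds to display :10
        DISPLAY=localhost:10 some-phone-app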

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.
    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow window.
    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the source type, as we will be outputting columns from our stored procedure.
    Step 4: Double-click the Script Component to open the editor.
    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String.
    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.
    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.
        //Configure the connection string to your credentials
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";
        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            //Create the command to call the stored procedure CreateJob
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            //Execute CreateJob
            //CreateBatch requires JobID as input, so we store this value for later
            SalesforceDataReader rdr = cmd.ExecuteReader();
            String JobID = "";
            while (rdr.Read())
            {
                JobID = (String)rdr["JobID"];
            }

            //Create the command for CreateBatch; for this example we are adding two new rows
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" +
                "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

            //Execute CreateBatch
            SalesforceDataReader batRdr = batCmd.ExecuteReader();
        }
    Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader that contains the output of CreateJob like so:
        while (rdr.Read())
        {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }
    Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class:
        using System.Data;
        using System.Data.RSSBus.Salesforce;
    Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane.
    Step 10: If you had any outputs from the Script Component you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.
    Step 11: Your project should be ready to run.

    Read the article

  • How to Banish Duplicate Photos with VisiPic

    - by Jason Fitzpatrick
    You meant well, you intended to be a good file custodian, but somewhere along the way things got out of hand and you've got duplicate photos galore. Don't be afraid that deleting them will lose important photos; read on as we show you how to clean up safely. Deleting duplicate files, especially important ones like personal photos, makes a lot of people quite anxious (and rightfully so). Nobody wants to be the one to realize that they deleted all the photos of their child's first birthday party during a hard drive purge gone wrong. In this tutorial we're going to show you how to go beyond the limited reach of tools which simply compare file names and file sizes. Instead we'll be using a program that combines that kind of comparison with actual image analysis to help you weed out not just perfect 1:1 file duplicates but also those piles of resized-for-email images, cropped images, and other modified images that might be cluttering up your hard drive.

    Read the article

  • How to configure VNC on a remote Ubuntu server?

    - by Jason H.
    I am currently in the process of creating an Ubuntu server on the Rackspace cloud, and I am trying to configure this server for VNC over SSH from my MacBook Air. Here is a summary of what I am trying to accomplish. Server details: Ubuntu 12.04, hosted by Rackspace (no physical access), needs to run GNOME 3, and the VNC connection must be over SSH. Any guides or assistance would be great. I have installed GNOME 3 and Vino as the VNC server; I'm just not sure how to configure VNC properly. I've looked online, but I'm stuck at the VNC server portion.
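    The usual pattern, once a VNC server such as Vino or x11vnc is listening on the server's loopback interface, is to carry the whole session inside an SSH port forward from the MacBook. A sketch (5900 assumes the VNC server's default port):
        # on the MacBook Air: forward a local port to the server's VNC port over SSH
        ssh -N -L 5901:localhost:5900 user@your-rackspace-ip

        # then point the Mac's Screen Sharing (or any VNC client) at:
        #   vnc://localhost:5901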

    Read the article

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and log output in elasticsearch / kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter { if [program] == "nginx-access" { grok { match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded = false host = " Here is my logstash config file input { syslog { type => syslog port => 5544 } } filter { if [program] == "nginx-access" { grok { match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[% {HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded => false host => "localhost" cluster => "cluster01" } email { from => "[email protected]" match => [ "Error 504 Gateway Timeout", "status,504", "Error 404 Not Found", "status,404" ] subject => "%{matchName}" to => "[email protected]" via => "smtp" body => "Here is the event line that occured: %{@message}" htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>" } } I've checked line 23 which is referenced in the error and it looks fine....I've tried taking out the filter, and everything works...without changing that line. Please help

    Read the article

  • Fusion CRM Data Integration and Migration from Conemis (D)

    - by Richard Lefebvre
    Conemis Data Integration Tools for Oracle Fusion CRM offer easy-to-use, pre-configured tools for data integration, data quality, and migration of data from Oracle CRM On Demand and third-party applications to Oracle Fusion CRM. The Conemis solution includes: pressure fueling of data for Fusion CRM, migration covered from legacy to Fusion CRM, data quality in migration and integration, intuitive data housekeeping for IT and sales, and backups of Fusion CRM environments. The solution's benefits include Fusion CRM integrated out of the box, connection to other applications, ready-made data mapping, instant availability without installation, full configurability, shared use in integration expert groups, one GUI for several environments/pods, reduced costs and risks in migration projects, etc. Conemis AG, a German data integration company founded in 2009, offers software, services, and expertise for data migration and integration with Oracle CRM products. For more details, please contact Dr. Daniel Rolli ([email protected]) www.conemis.com.

    Read the article

  • After upgrading to Ubuntu 12.04, my Broadcom wireless STA driver is not working

    - by Nuwan Aluwihare
    I upgraded my HP DV5-1000US laptop to Ubuntu 12.04, but I cannot get it connected to the wireless. I went through Settings, Hardware, Additional Drivers, and it says that "No proprietary drivers are in use on this system". It also says the "Broadcom STA Wireless Driver" is not activated. I tried to activate it after getting my laptop connected to the net through a wired connection, but it says "Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log"

    Read the article

  • Network manager indicator missing

    - by Jarmo
    I recently upgraded from 11.10 to 12.04. My first attempt failed, and I received an error stating that not all of the required packages were downloaded. Before (successfully) attempting again, I noticed that there was no longer a networking indicator in the upper panel. The indicator did not reappear with the installation of 12.04. To be clear, my wireless connection has experienced no problems, despite the missing indicator. Here are the solutions that I have found which did not work for me: Editing /etc/NetworkManager/NetworkManager.conf and replacing [ifupdown] managed=false with =true. Reinstalling network-manager (via apt-get install --reinstall). I am currently running 12.04 on an Asus Eee PC 1005 HA, and I am new to seeking solutions through forums, so I apologize if I have neglected to provide some vital information about my hardware.

    Read the article

  • Dovecot authentication not working

    - by user1488723
    I run an Ubuntu 10.04 VPS with Postfix and Dovecot installed. For a while I had problems with the mail server itself (Postfix), but now it runs OK. I can telnet into it from localhost (telnet localhost 25 while logged in) and I'm blocked if I try to do it from the outside (telnet mail.example.org 25). This is as it should be, according to my main.cf. However, when I try to log in using Dovecot (openssl s_client -connect mail.example.com:993), I'm allowed in but denied when trying to identify myself as a user. Excerpt from a Dovecot login:
        Key-Arg : None
        Start Time: 1341074622
        Timeout : 300 (sec)
        Verify return code: 18 (self signed certificate)
        OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
    When I continue and try to log in to a specific user with the command: A001 login user password I get: A001 NO [AUTHENTICATIONFAILED] Authentication failed. I've reset the password to ensure it is correct, and I know the user (user) exists on the system. When I do /etc/init.d/dovecot reload I get:
        /etc/init.d/dovecot: 29: maildir:~/Maildir: not found
        * Reloading IMAP/POP3 mail server dovecot [ OK ]
    Could it be that the mailboxes aren't found? Postfix main.cf:
        home_mailbox = Maildir/
        mailbox_command =
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_loglevel = 1
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain = $mydomain
    Dovecot.conf:
        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/Maildir
        auth_verbose = yes
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz0123456789
        protocol imap {
            imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
            mechanisms = plain login
            passdb pam {
            }
            userdb passwd {
            }
            socket listen {
                client {
                    path = /var/spool/postfix/private/auth
                    user = postfix
                    group = postfix
                    mode = 0660
                }
            }
        }

    Read the article

  • Connect to MySQL on remote server from inside python script (DB API)

    - by Atul Kakrana
    Very recently I started writing Python scripts that need to connect to a few databases on a MySQL server. The problem is that my script works fine when I work from the office, but running it from home while on the office VPN generates a connection error. I also noticed that the MySQL client SQuirreL cannot connect from home either, but works fine on the office computer; I think both are failing for the same reason. Do I need to create an SSH tunnel and forward the port? If yes, how do I do it? MySQL is installed on a server I have SSH access to. Please help me on this. AK
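    Since SSH access to the server exists, the simplest approach is an SSH local port forward; the script then connects to the forwarded local port instead of the remote host directly. A sketch with placeholder host and credential names:
        # from the home machine: keep a tunnel open, local port 3307 -> MySQL (3306) on the server
        ssh -N -L 3307:127.0.0.1:3306 youruser@your-mysql-server

        # in the Python script, point the DB API connection at the tunnel, e.g.
        #   MySQLdb.connect(host="127.0.0.1", port=3307, user="dbuser", passwd="dbpass", db="mydb")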

    Read the article

  • I have 20 Ubuntu 12.04 LTS machines; some are unable to network with the other machines, though they all have the same workgroup, viz. Ubuntu

    - by Gaurang Agrawal
    During installation I set my workgroup to "Workgroup"; after installation I changed it to ubuntu, as I was unable to access computers in the network. What changes do I need to make in the Samba configuration? I don't know if this is related:
        shared@shared:~$ testparm
        Load smb config files from /etc/samba/smb.conf
        rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
        Processing section "[printers]"
        Processing section "[print$]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE
        Press enter to see a dump of your service definitions
        [global]
            workgroup = UBUNTU
            server string = %h server (Samba, Ubuntu)
            encrypt passwords = No
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = Enter\snew\s\spassword:* %n\n Retype\snew\s\spassword:* %n\n password\supdated\ssuccessfully .
            username map = /etc/samba/smbusers
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            name resolve order = bcast host
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb
        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No
        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers
        smbclient -L 192.168.1.108
        Enter shared's password:
        Connection to 192.168.1.108 failed (Error NT_STATUS_HOST_UNREACHABLE)
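    If the aim is simply to put every machine back into the same workgroup, the relevant change is the workgroup line in the [global] section of /etc/samba/smb.conf, followed by a restart of the Samba daemons. A sketch for 12.04 (which manages smbd/nmbd as Upstart jobs):
        # /etc/samba/smb.conf, in the [global] section
        workgroup = WORKGROUP

        # then restart Samba
        sudo restart smbd
        sudo restart nmbd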

    Read the article

  • How can I stop Ubuntu from automatically unmounting Samba shares?

    - by Billy ONeal
    I have some music files I'd like to listen to sitting on a Samba share. I added this share via the Ubuntu GUI (Places - Connect to server...), and everything worked just fine. However, despite the fact that my music file is playing from this location, after I've gone a while without touching the location in the Nautilus GUI, Ubuntu/GNOME decides that I'm not using the share anymore and terminates the connection. Thus, my music stops playing and Rhythmbox is unhappy with me. Simply clicking on the new shortcut that the "Connect to server..." bit created for me immediately makes the files come back and allows me to restart the music. How can I have Ubuntu not automatically dismount Samba shares?
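    One alternative that sidesteps the GVFS auto-unmount behaviour altogether is a permanent CIFS mount in /etc/fstab. A sketch, assuming the cifs-utils package is installed and using placeholder server, share, and user names plus a credentials file:
        # /etc/fstab -- one line; mounts the share at boot and keeps it mounted
        //server/music  /mnt/music  cifs  credentials=/home/billy/.smbcredentials,uid=billy,iocharset=utf8  0  0

        # /home/billy/.smbcredentials (chmod 600)
        # username=sambauser
        # password=sambapass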

    Read the article

  • PHP script not automatically updating when moved to another server

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and Go Daddy. It updates for him, but when I load it onto my site it works for 6 hours; then, as soon as the reload is supposed to occur, it errors and I get a 500 timeout error. His page is at jeremynoeljohnson.com/yakezieclub. My page is currently at http://sweatingthebigstuff.com/yakezieclub but when you add ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of it is messing up. I literally uploaded the same code as him. Here's the reload part:
        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6; // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to
        // - or whenever the page is requested after 24 hours has passed
        $writenewdata = false;
        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }
        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }
        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }
        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->";
            exit;
        }
        ob_start(); // start the output buffer
        ?>

    Read the article
