Search Results

Search found 5414 results on 217 pages for 'otn j master'.

Page 20 of 217

  • Merge changes when a file on a branch has split into two files on the master

    - by carleeto
    This is the result of a massive class C on master having been refactored, further down the line, into two smaller classes, C1 and C2. C was then made a subclass of C2 and cut down to a skeletal version for backward compatibility. From that point on, master contained C, C1 and C2, and on that commit git reported that C was renamed to C1. The branch was last updated before any of this happened. (It is all C++ code, if that helps to visualize the files involved.) As expected, when I tried to rebase the branch onto master, there were conflicts that needed to be resolved, and as usual I used mergetool. The mergetool now shows the following: on Local I have the skeletal version of C, while Base and Remote have a bunch of changes to C. Because the skeletal version of C exists on Local, I conclude that the changes from Base and Remote should actually go into C1, leaving C alone. My question is: how do I do this?
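
    A minimal sketch of one way to handle this during the rebase conflict (the file names C.cpp and C1.cpp are hypothetical): keep master's skeletal C, dump the branch's version of C out of the index, and port its hunks into C1 by hand.

        # During the conflict, stage 3 ("theirs") holds the branch's version of the file
        git show :3:C.cpp > /tmp/C.branch.cpp
        # Keep master's skeletal C ("ours" is master while rebasing)
        git checkout --ours C.cpp
        # ...manually merge the relevant hunks from /tmp/C.branch.cpp into C1.cpp, then:
        git add C.cpp C1.cpp
        git rebase --continue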

    Read the article

  • master to slave replication in mysql

    - by vijay
    I need master-to-slave replication in MySQL, so I am creating this procedure to change the master dynamically:

        delimiter //
        CREATE PROCEDURE change_master( in host_ip varchar(50))
        begin
            stop slave;
            CHANGE MASTER TO MASTER_HOST = host_ip, MASTER_PORT=3306,
                MASTER_USER='replication', MASTER_PASSWORD='slave';
            start slave;
        end;
        //

    but I am getting an error:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'host_ip, MASTER_PORT=3306, MASTER_USER='replication', MASTER_PASSWORD='slave'; s' at line 4

    If I leave the host blank it is fine, e.g.:

        CHANGE MASTER TO MASTER_HOST = '', MASTER_PORT=3306, MASTER_USER='replication', MASTER_PASSWORD='slave';

    I have tried many times, but in this statement I am not able to use any variable. Why? If you know, please help me. Thanks.
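
    A hedged sketch of one workaround (it assumes a MySQL version that permits CHANGE MASTER TO inside a prepared statement): CHANGE MASTER TO only accepts literal values, not variables, so build the statement as a string and run it with PREPARE/EXECUTE.

        DELIMITER //
        CREATE PROCEDURE change_master(IN host_ip VARCHAR(50))
        BEGIN
            STOP SLAVE;
            -- CHANGE MASTER TO rejects variables, so assemble the literal statement first
            SET @sql = CONCAT("CHANGE MASTER TO MASTER_HOST = '", host_ip, "', ",
                              "MASTER_PORT = 3306, MASTER_USER = 'replication', ",
                              "MASTER_PASSWORD = 'slave'");
            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
            START SLAVE;
        END //
        DELIMITER ;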

    Read the article

  • Forward SNMP requests from AgentX Master to AgentX Subagent

    - by Nadia
    I am running an AgentX master and an AgentX subagent on Linux. When I run snmpget on a default MIB object, e.g. sysDescr.0, it returns fine, but when I request an object that was registered through the AgentX subagent, it times out. It appears that the master receives the GET request but does not forward it on to the subagent. The MIB registers successfully, but when the AgentX master receives the GET request it says "Sending 60 bytes to UDP: unknown", as if it can't find the location to forward to. Am I missing some configuration on the subagent side? How does the master know which subagent is supposed to receive the requests?
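
    A minimal configuration sketch for the net-snmp side (the socket address is an assumption; adjust to your setup): the master's snmpd.conf must enable AgentX, and both processes must agree on the AgentX socket, otherwise registrations and forwarded requests never meet.

        # /etc/snmp/snmpd.conf on the master (a sketch, not your actual config)
        master          agentx
        agentXSocket    tcp:localhost:705
        agentXTimeout   15
        # The subagent must be started against the same socket (tcp:localhost:705 here),
        # so the master can forward GET/GETNEXT requests for the OIDs it registered.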

    Read the article

  • access dropdown control from the master page to the content page using asp.net

    - by Isha Jain
    I have a master page and several content pages. On the master page I have a textbox and a dropdown, and the values in the dropdown vary according to the content page: for one content page the dropdown may contain branch name, city and address, while for another content page under the same master page it may have values like contact number, email ID, and so on. So please help me: how can I bind that dropdown from my content page? Thanks.
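
    A hedged sketch of one way to do this from a content page's code-behind (the control name ddlFields is hypothetical, and it assumes the dropdown sits directly on the master page): locate the master's DropDownList with FindControl and bind it per page.

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Reach into the master page and fill the dropdown for this content page
                var ddl = Master.FindControl("ddlFields") as DropDownList;
                if (ddl != null)
                {
                    ddl.Items.Clear();
                    ddl.Items.Add("Branch name");
                    ddl.Items.Add("City");
                    ddl.Items.Add("Address");
                }
            }
        }

    Alternatively, adding a MasterType directive to the content page gives a strongly typed Master property, so the master page can expose the dropdown (or a bind method) directly.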

    Read the article

  • jquery master page problem

    - by boraer
    Hi everybody, I am developing an ASP.NET project and I use jQuery with it, but when I use a master page with a content page my jQuery code does not work; if I use it in a normal page without a master, jQuery works fine. I include the jQuery script reference in the master page. In my code, when a button is clicked a timer starts and the button is disabled until the timer finishes. That's all, but it is not working with a master page.
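
    A common cause, sketched below (Button1 is a hypothetical control name): inside a content page, server control IDs are rewritten by their naming containers (e.g. ctl00_ContentPlaceHolder1_Button1), so a selector like $('#Button1') stops matching. Emitting the ClientID keeps the selector valid.

        <script type="text/javascript">
            $(document).ready(function () {
                // Use the rendered client-side ID, not the server-side ID
                $('#<%= Button1.ClientID %>').click(function () {
                    $(this).attr('disabled', 'disabled'); // re-enable when the timer finishes
                    // ...start the timer here...
                });
            });
        </script>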

    Read the article

  • Getting access to a custom Master page from a user control

    - by Bernard
    Hi. We have created a master page that inherits from the ASP.NET MasterPage class. We also have user controls that inherit from the standard ASP.NET UserControl class. Our master page has a public member variable, and we need to access that member from the user controls we use, but we can't seem to get at it. Is it our architecture that is wrong, or the idea itself of a user control getting access to master page variables?
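
    A hedged sketch (the type name MyMaster and member SomeValue are hypothetical): from inside a user control, cast the containing page's Master reference to the concrete master type to reach its public members.

        // Inside the user control's code-behind
        var master = this.Page.Master as MyMaster;
        if (master != null)
        {
            // A public field works, but exposing the value as a property is cleaner
            string value = master.SomeValue;
        }

    With nested master pages you may need to walk Master.Master until you reach the custom type.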

    Read the article

  • How to map a virtual directory to a website in VS?

    - by salvationishere
    I am developing a C# website in VS 2008 and trying to add a master file. I created a virtual directory in IIS housing the "Master" folder, which contains the master files. Now how do I reference these files from my website in VS? One problem is that I do not know where I need to publish this Master folder to. The other problem is that I do not know how to reference the master file in my .aspx Page directive. FYI, this Master folder is physically located outside of c:\inetpub\, in a totally separate file location. Is this a problem?
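
    A hedged sketch of the Page directive, assuming the virtual directory shows up under the web application as /Master (an assumption about your IIS layout): MasterPageFile takes an application-relative virtual path, so what matters is how IIS maps the folder into the site rather than where it physically lives.

        <%@ Page Language="C#" MasterPageFile="~/Master/Site.master"
            AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>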

    Read the article

  • Tellago announces SQL Server 2008 R2 BI quick adoption programs

    - by gsusx
    During the last year, we (Tellago) have been involved in various business intelligence initiatives that leverage some emerging BI techniques such as self-service BI or complex event processing (CEP). Specifically, in the last few months, we have partnered with Microsoft to deliver a series of events across the country where we present the different technologies of the SQL Server 2008 R2 BI stack such as PowerPivot, StreamInsight, Ad-Hoc Reporting and Master Data Services. As part of those events...(read more)

    Read the article

  • Transfer DNS zones from master to slave (MS DNS to BIND9)

    - by Bryan
    Hello, I have a problem with DNS servers. My master DNS server runs on Microsoft DNS, and now I want to start a slave DNS server on Linux with BIND9. The problem is that the master MS DNS server can't validate the slave DNS server (BIND9) and can't resolve its FQDN. Maybe I missed something... the firewall, DNS configuration and network look OK. And the second question is: how can I make a full transfer of the DNS zones from MS DNS to the BIND9 slave? Thanks in advance. Regards, Bryan
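
    A minimal named.conf sketch for the BIND9 side (the zone name and master IP are placeholders): declare each zone as a slave pointing at the MS DNS box, and allow zone transfers to the slave's IP on the Microsoft side; BIND will then pull the full transfer (AXFR) on its own.

        zone "example.com" {
            type slave;
            file "/var/cache/bind/db.example.com";
            masters { 192.0.2.10; };   // IP address of the MS DNS master
        };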

    Read the article

  • Php.ini: Local Value vs Master Value (safe_mode, specifically)

    - by Philipp Lenssen
    I can change php.ini values on my Apache server and restart to see them take effect via a script calling phpinfo(). However, one setting is causing problems: safe_mode. I set it to "off" in php.ini, but phpinfo() still shows it as Local Value: On, Master Value: Off. How can I find out which local value is overriding the master value? There's no .htaccess directive of that kind in the httpdocs folder in question... (I already downloaded all the files phpinfo() claims are additional .ini files parsed, but safe_mode is not set in them.)
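
    A hedged example of the usual culprit (the vhost details are hypothetical, and this assumes mod_php): a Local Value that differs from the Master Value typically comes from a per-vhost or per-directory override in the Apache configuration rather than from php.ini or .htaccess.

        # Somewhere in httpd.conf or a vhost include
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/vhosts/example.com/httpdocs
            # mod_php per-vhost override; this becomes the "Local Value" phpinfo() reports
            php_admin_flag safe_mode on
        </VirtualHost>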

    Read the article

  • MySQL slave replication Seconds behind master increasing?

    - by geekmenot
    I started a MySQL slave using innobackupex, and Read_Master_Log_Pos and Relay_Log_Pos are updating; however, Seconds_Behind_Master keeps increasing (it is at 496637 currently and rising). Any ideas on how to fix this?

        mysql> SHOW SLAVE STATUS\G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host: 10.8.25.111
        Master_User: repl
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: mysql-bin.005021
        Read_Master_Log_Pos: 279162266
        Relay_Log_File: mysql-relay-bin.000004
        Relay_Log_Pos: 378939436
        Relay_Master_Log_File: mysql-bin.004997
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 0
        Last_Error:
        Skip_Counter: 0
        Exec_Master_Log_Pos: 378939290
        Relay_Log_Space: 26048998487
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: 497714
        Master_SSL_Verify_Server_Cert: No
        Last_IO_Errno: 0
        Last_IO_Error:
        Last_SQL_Errno: 0
        Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
        Master_Server_Id: 1
        1 row in set (0.00 sec)

    Read the article

  • Puppet Agent still able to connect to Master after certificate revocation

    - by chris
    In summary: the client connects for the first time and requests a cert; on the master, puppetca -s client is executed; the client gets the cert and completes the run successfully. Fine. But now: on the master, puppetca -c client is executed and the client's cert is no longer in the cert list; the client connects again and can still perform the run as usual; restarting puppetmasterd doesn't solve the issue. How can I prevent the client from connecting once its cert has been revoked? Thanks in advance.

    Read the article

  • Puppet nodes can't find master, EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files so that they reference all other EC2 nodes using the internal IP addresses, will this work, or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. in /etc/hosts:

        ip-10-11-12-13.internal some_node_name

    If I do this, can I reference some_node_name anywhere in my scripts where I would previously have used the IP address? On my puppet agent servers, I have a reference to my puppet master like:

        public-ip-here puppet

    When I reboot my puppet agents, syslog shows they couldn't find the master, with the message:

        getaddinfo : name or service not known

    I did get it to work by updating /etc/default/puppet and adding to the options:

        --server=public-ip-here

    From what I read, puppet will by default try to use 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?
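
    A hedged sketch (addresses and names are placeholders): /etc/hosts entries need the usual "IP hostname alias" layout for getaddrinfo to resolve them, and the agent can be pointed at the master in puppet.conf instead of on the command line.

        # /etc/hosts on each agent (internal EC2 address of the master)
        10.11.12.13    puppet    puppet.internal.example

        # /etc/puppet/puppet.conf on each agent
        [agent]
        server = puppet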

    Read the article

  • Firefox Master Password (ssh-agent)

    - by BCable
    I use the master password feature of Firefox, and I also use SSH keys to log in to a bunch of UNIX machines. For SSH, there is a very useful application called ssh-agent that runs in the background and holds what is needed to unlock the key, so you don't have to type the passphrase every single time you want to connect. I open and close Firefox a lot, so I was curious: is there a way to have Firefox run in the background (preferably doing nothing, but the whole process would be fine too, I guess) so that I don't have to type my master password every single time I open Firefox? Thanks!

    Read the article

  • Will unbinding a server from an Open Directory Master remove its own file shares

    - by scape
    According to this article: http://support.apple.com/kb/TS3180?viewlocale=en_US I am required to remove the LDAP binding of my second Mac OS X Lion server before I set it up as a replica server. I initially set the server up as a replica, or so I thought, and created file shares (they refer to the first server's ACLs) before I realized it was never promoted to a replica server. So as of now it is running and sharing files with correct ACL permissions, but if the master goes down all the file shares seize up. I want to set it up as a replica so this is not an issue; however, I don't want to lose the file shares and their permissions as I remove the binding and restart the server. Apparently I must remove the LDAP binding to the OD Master (also a Mac OS X Lion server) before setting it up as a replica.

    Read the article

  • How to check if redis master is OK?

    - by e-satis
    In the documentation, they advise the MONITOR command. But it has a 50% performance penalty for the whole system, and how would I even use it? Watching the output over SSH until I stop seeing anything? Let's say I have 3 servers: one with a Redis master, one with a Redis slave, and one with my website querying the Redis master. How can I, from my website server, cleanly make the decision to fall back to the slave by sending the SLAVEOF NO ONE command? My first step would be to put in some kind of timeout check with a simple ping, just to be sure the server is online. But for Redis specifically, I have no clue.
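
    A minimal health-check sketch using redis-cli (the host names and the 2-second timeout are placeholders): PING the master, and if it does not answer, promote the slave with SLAVEOF NO ONE.

        #!/bin/sh
        MASTER=redis-master.example
        SLAVE=redis-slave.example

        # PING returns PONG when the server is healthy; wrap the probe in a short timeout
        if ! timeout 2 redis-cli -h "$MASTER" ping | grep -q PONG; then
            echo "master unreachable, promoting slave"
            redis-cli -h "$SLAVE" slaveof no one
        fi

    Later Redis releases also ship Redis Sentinel, which automates exactly this kind of failover decision.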

    Read the article

  • puppet agent doesn't retrieve files from master

    - by nicmon
    I have a very basic question regarding Puppet 3.0.1 configuration. I set up a puppet master server (CentOS) with 2 agents (CentOS and Windows 7); all 3 can ping and access each other, and there is no error at all. I have copied a file to /etc/puppet/files/test2.txt. My site.pp (/etc/puppet/manifests) contains these lines:

        node default {
            include test
            file { "/tmp/testmaster.txt":
                owner  => root,
                group  => root,
                mode   => 644,
                source => "puppet:///files/test2.txt"
            }
        }

    but no file gets created on the agent servers under /tmp/ once I run "puppet agent --test". Here is the output:

        [root@agent1 ~]# puppet agent --test
        Info: Retrieving plugin
        Info: Caching catalog for agent1.mydomain.com
        Info: Applying configuration version '1354267916'
        Finished catalog run in 0.02 seconds

    "puppet apply /etc/puppet/manifests/site.pp" does create testmaster.txt under /tmp/ on the master.
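
    One thing worth checking, sketched below with the default paths (adjust to your layout): a puppet:///files/... source refers to a file-server mount named "files", which has to be declared on the master in fileserver.conf before agents can fetch from it.

        # /etc/puppet/fileserver.conf on the master
        [files]
            path /etc/puppet/files
            allow *

    After adding the mount, restart the puppet master and re-run puppet agent --test on an agent.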

    Read the article

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it, provided by our ISP. They've told us they've started getting access-denied responses when doing zone transfers (AXFR). If I add my own IPs to the allow-transfer list, I also get a failed transfer when using dig with the AXFR argument. Here is what my BIND configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1;  //localhost
                1.1.1.1;    //public dns slave 1
                2.2.2.2;    //public dns slave 2
                3.3.3.3;    //public dns slave 3
            };
            also-notify {
                1.1.1.1;    //public dns slave 1
                2.2.2.2;    //public dns slave 2
                3.3.3.3;    //public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes;
                severity info;
                print-category yes;
                print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in {
            type hint;
            file "root.hint";
        };

        zone "localhost" in {
            type master;
            file "localhost.zone";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };

        include "/etc/named.conf.include";

        zone "example.net " {
            type master;
            file "/var/lib/named/master/example.net.hosts";
        };

        zone "example.com " {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };

        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone definitions and still get failed transfers. Any idea what I'm doing wrong?

    Read the article

  • How do I make Master/Detail subreports in ReportBuilder come out right?

    - by Mason Wheeler
    I've got a report in ReportBuilder that's supposed to report on two objects. I didn't create this report, and I can't ask the person who did about how it works. Before running the report, we have some code that goes through, finds all the properties on the objects, and loads them into a memory dataset that looks like this:

        OBJECT_ID:  TStringField
        PROP_NAME:  TStringField
        PROP_VALUE: TStringField

    The report engine then creates a line on the report for each property in this dataset. This is implemented in a sub-report, whose parent only contains an OBJECT_ID, which is a human-readable name. Everything was going great until we had to display a "comment" of arbitrary size in the report. I made a second sub-report with a TMemoField so it could hold the text, and set the report up in the report designer. What I expect when I run the report is something that looks like this:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 properties
        Object 2 comment

    I've managed to get just about everything but that. I used the MasterDataPipeline and MasterFieldLinks properties of the sub-reports' pipelines to try to link the OBJECT_IDs of the sub-reports to the OBJECT_ID of the header, and that's the closest I've managed to come, but now what I see is:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 comment

    The "Object 2 properties" section is nowhere to be seen, even though I've manually verified that the data is making it into the dataset correctly. This is driving me nuts. Any ReportBuilder gurus out there know what's going on and how to fix it?

    Read the article

  • MVC Page not showing up, 404 not found

    - by mwright
    I have a very simple MVC site that is returning a 404 Not Found error when trying to load a page at the very beginning. I'm looking for some direction to troubleshoot this problem, since there is really nothing to go on from the error message. The error I'm getting is:

        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /Views/Other/Index.aspx

    Below I have included the code for the various pieces; the routing rules are the defaults:

        routes.MapRoute(
            "Default",                                                                // Route name
            "{controller}/{action}/{id}",                                             // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional}  // Parameter defaults
        );

    The site is using nested master pages; I'm not sure if this is involved with the problem, but I'm trying to include as much detail as possible. I have:

        Controllers
            OtherController
        Views:
            Shared Folder:
                Site.Master
            Other Folder:
                Other.Master
                Index.aspx

    Site.Master code:

        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>
                <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
            </title>
        </head>
        <body>
            <div>
                <asp:ContentPlaceHolder ID="MainContent" runat="server">
                </asp:ContentPlaceHolder>
            </div>
        </body>
        </html>

    Other.Master code:

        <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <asp:Content ID="OtherTitle" ContentPlaceHolderID="TitleContent" runat="server">
            OTHER PAGE - MASTER TITLE
            <asp:ContentPlaceHolder ID="OtherPageTitle" runat="server">
            </asp:ContentPlaceHolder>
        </asp:Content>
        <asp:Content ID="OtherContent" ContentPlaceHolderID="MainContent" runat="server">
            Some other content.
            <asp:ContentPlaceHolder ID="PageContent" runat="server">
            </asp:ContentPlaceHolder>
        </asp:Content>

    Index.aspx code:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Other/Other.Master" Inherits="System.Web.Mvc.ViewPage" %>
        <asp:Content ID="IndexTitle" ContentPlaceHolderID="OtherTitle" runat="server">
            Home
        </asp:Content>
        <asp:Content ID="IndexContent" ContentPlaceHolderID="OtherContent" runat="server">
            Index content
        </asp:Content>

    OtherController code:

        namespace MVCProject.Controllers
        {
            public class OtherController : Controller
            {
                //
                // GET: /Member/
                public ActionResult Index()
                {
                    // Have also tried:
                    // return View("Index", "Other.Master");
                    return View();
                }
            }
        }

    Read the article

  • User Control not loading based on location

    - by mwright
    I have an ASP.NET MVC solution that uses nested master pages to load content. On the first master page I load a header, then have the content placeholder, and then load a footer. This master page is referenced by another master page, which adds some additional information based on whether the user is logged in or not. When I load a page that references these master pages, the header loads but the footer does not. If I move the footer up above the content placeholder, it loads into the page. Any ideas why this might be the case? The code for the master page that contains the footer is as follows:

        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>
                <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
            </title>
        </head>
        <body>
            <div class="header">
                <% Html.RenderPartial("Header"); %>
            </div>
            <div>
                <asp:ContentPlaceHolder ID="MainContent" runat="server">
                </asp:ContentPlaceHolder>
            </div>
            <div class="footer">
                <% Html.RenderPartial("Footer"); %>
            </div>
        </body>
        </html>

    Read the article

  • Git. Remote HEAD is ambiguous.

    - by Siegfried
    I checked the relevant thread but still can't solve this problem. When I type "git remote show origin", I get:

        * remote origin
          Fetch URL: xxxx
          Push  URL: xxxx
          HEAD branch (remote HEAD is ambiguous, may be one of the following):
            development
            master
          Remote branches:
            development tracked
            master      tracked
          Local branches configured for 'git pull':
            development merges with remote development
            master      merges with remote master
          Local ref configured for 'git push':
            master pushes to master (up to date)

    I also checked "git show-ref", and I got:

        3f8f4292e31cb8fa5938dbdd406b2f357764205b refs/heads/development
        3f8f4292e31cb8fa5938dbdd406b2f357764205b refs/heads/master
        3f8f4292e31cb8fa5938dbdd406b2f357764205b refs/remotes/origin/development
        3f8f4292e31cb8fa5938dbdd406b2f357764205b refs/remotes/origin/master

    Here is the list of all the branches I have, from "git branch -a":

          development
        * master
          remotes/origin/development
          remotes/origin/master

    And this is what is in .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly
            autocrlf = false
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = xxxx
            push = refs/heads/master:refs/heads/master
        [branch "master"]
            remote = origin
            merge = refs/heads/master
        [branch "development"]
            remote = origin
            merge = refs/heads/development

    It seems that the remote development and master branches point at the same commit. How do I solve this ambiguity problem? Thank you!
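
    A hedged sketch of one way to resolve the ambiguity (the branch name is your choice): because both remote branches point at the same commit, git cannot guess which one the remote's HEAD refers to, so tell your clone explicitly.

        # Locally, point origin/HEAD at the branch you consider the default:
        git remote set-head origin master

        # Or query the remote again and let git pick up its HEAD automatically:
        git remote set-head origin --auto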

    Read the article

  • what port should I open for mysql master-master replication?

    - by Vanddel
    I have two servers running php5-fpm and a load balancer running nginx; the three servers share /var/www/drupal using NFS, and NFS is working correctly. I replicated the two servers' databases using MySQL master-master replication. Everything was working fine until I added my iptables rules. In my iptables script I first drop everything in all chains and then accept the traffic I want; other than that there are no other drop statements. I opened port 3306 for MySQL replication like this (the rules are on both servers):

        iptables -A INPUT  -p tcp -s $ip_Of_Other_Server --dport 3306 -j ACCEPT
        iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --sport 3306 -j ACCEPT

    The problem is that when I run both servers and try to log in to Drupal with my account, it doesn't log in, although I see a successful login attempt in the Drupal logs. When I run only one of the servers I can log in normally, and when I allow everything in my iptables rules it also works normally. I believe there is some port I need to open in iptables for the replication to work correctly, but I can't find which one.
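
    A hedged sketch of rules that cover both directions (reuse your $ip_Of_Other_Server variable): in master-master replication each server also makes outbound client connections to the other server's port 3306, so the reverse direction needs dport/sport swapped, or a single stateful rule.

        # Outbound replication client connections to the peer's MySQL, plus their replies:
        iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --dport 3306 -j ACCEPT
        iptables -A INPUT  -p tcp -s $ip_Of_Other_Server --sport 3306 -j ACCEPT

        # Or, more simply, allow established/related traffic once and only open new
        # inbound connections explicitly:
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT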

    Read the article
