Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2 vCPUs, 4G RAM, 160GB disk space, 400Mb/s network. System image: Ubuntu 12.04 LTS. I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually, the server has a loading time of 4-5 seconds. Recently, this has climbed to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms.

    Running the top command and sorting by CPU returns the following:

        top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
        Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
        Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2
        32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2
        32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2
        32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2
        32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2
        32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2
        32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2
        32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2
        32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2
        30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2
        32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2
        32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2
        32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2
        32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2
        32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2
        32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2
        32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2
        32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2
        32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2
        32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2
        32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2
        32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2
        32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2
        32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2
        32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2
        32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2
        31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld
        32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2
        32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2
        32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2
        32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2
        32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2
        32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2
        32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2
        31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2
        32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2
        32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2
        1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init

    Moreover, running free -m returns the following:

        total used free shared buffers cached
        Mem: 4003 3919 83 0 118 901
        -/+ buffers/cache: 2899 1103
        Swap: 0 0 0

    To investigate this further, I installed Apache Buddy; it recommended that I reduce the MaxClients setting, which I did. I also installed MySQLTuner, and it suggests that I set innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole memory is 4G.
    Here is the output from Apache Buddy:

        ### GENERAL REPORT ###
        Settings considered for this report:
        Your server's physical RAM: 4003MB
        Apache's MaxClients directive: 40
        Apache MPM Model: prefork
        Largest Apache process (by memory): 73.77MB
        [ OK ] Your MaxClients setting is within an acceptable range.
        Max potential memory usage: 2950.8 MB
        Percentage of RAM allocated to Apache 73.72 %

    And this is the output of MySQLTuner:

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
        [--] Reads / Writes: 45% / 55%
        [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
        [OK] Slow queries: 0% (0/675K)
        [OK] Highest usage of available connections: 26% (40/151)
        [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
        [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
        [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
        [!!] Query cache prunes per day: 302886
        [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
        [!!] Joins performed without indexes: 12135
        [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
        [OK] Thread cache hit rate: 90% (1K created / 12K connections)
        [!!] Table cache hit rate: 17% (400 open / 2K opened)
        [OK] Open file limit used: 12% (123/1K)
        [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
        [!!] InnoDB buffer pool / data size: 2.0G/3.5G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
        Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
        query_cache_size (> 64M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 400)
        innodb_buffer_pool_size (>= 3G)

    Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have a few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
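
    For a rough cross-check of those recommendations, MaxClients can be sized from the numbers Apache Buddy reports: subtract MySQL's footprint from total RAM and divide by the largest Apache process. A minimal sketch, assuming ~74MB per Apache process and ~1.5G reserved for MySQL (both figures taken from the reports above, not measured fresh):

        #!/bin/sh
        # Rough MaxClients sizing: (total RAM - MySQL reservation) / largest Apache process.
        # APACHE_MB and MYSQL_MB are assumptions based on the reports quoted above.
        APACHE_MB=74
        MYSQL_MB=1500
        TOTAL_MB=$(free -m | awk '/^Mem:/{print $2}')
        echo "Suggested MaxClients: $(( (TOTAL_MB - MYSQL_MB) / APACHE_MB ))"

    With 4G total this lands around 33, which shows why Apache Buddy accepts MaxClients 40 only while MySQL's buffers stay small: raising innodb_buffer_pool_size and keeping MaxClients high at the same time is what over-commits the box.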

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error:

        100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100603 19:11:46 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Warning: database page corruption or a failed InnoDB: file read of page 112541. InnoDB: Trying to recover it from the doublewrite buffer. InnoDB: Dump of the page: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): lots of binary data 100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043 InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765 InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Dump of corresponding page in doublewrite buffer: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): more binary data 100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363 InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931 InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Also the page in the doublewrite buffer is corrupt. InnoDB: Cannot continue operation. InnoDB: You can try to recover the database with the my.cnf InnoDB: option: InnoDB: innodb_force_recovery=6 100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that the 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL:

        100603 19:19:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=2 max_threads=151 threads_connected=2 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x15d33f0 Attempting backtrace.
        You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x453aff00 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364] /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166] /lib64/libpthread.so.0 [0x3a6e40eb10] /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310] /usr/libexec/mysqld [0x7674d8] /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48] /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d] /usr/libexec/mysqld [0x7dd1c1] /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0] /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc] /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09] /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964] /usr/libexec/mysqld [0x6aafe9] /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196] /usr/libexec/mysqld [0x63f9cd] /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0] /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b] /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79] /usr/libexec/mysqld [0x5d34b6] /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45] /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321] /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8] /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6] /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d] /lib64/libpthread.so.0 [0x3a6e40673d] /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 099999999999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,10074,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10104,10105,10106,10107,10108,10109)) GROUP BY i.hostid thd-thread_id=3 thd-killed=NOT_KILLED The manual page at contains information that should help you find out what is causing the crash. 100603 19:19:19 mysqld_safe Number of processes running now: 0 100603 19:19:19 mysqld_safe mysqld restarted

    Before recovering from an older backup as a last resort, I am looking for any more suggestions. Thanks!
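
    When innodb_force_recovery gets the server up but a full dump loops or crashes, a common salvage pattern is to dump table by table, so that one damaged table does not block the rest, and then rebuild and reimport. A minimal sketch, assuming the database is named mydb (a placeholder) and that credentials come from ~/.my.cnf:

        #!/bin/sh
        # Dump each table separately under innodb_force_recovery so a single
        # corrupt table doesn't abort the whole export. "mydb" is a placeholder.
        DB=mydb
        mkdir -p dump
        for t in $(mysql -N -B -e 'SHOW TABLES' "$DB"); do
            echo "dumping $t"
            mysqldump --skip-lock-tables --quick "$DB" "$t" > "dump/$t.sql" \
                || echo "$t" >> dump/FAILED.txt
        done

    Tables that end up in FAILED.txt are the damaged ones; those can sometimes be salvaged in primary-key ranges, or restored from the older backup while everything else comes from the fresh dump.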

    Read the article

  • Linux networking crash: best steps to find out the cause?

    - by Aron Rotteveel
    One of our Linux (CentOS) servers was unreachable last night. The server was not reachable in any way except for the remote console. After logging in with the remote console, it turned out I could not ping any outside hosts either. A simple service network restart solved the issue, but I am still wondering what could have caused this. My log files seem to indicate no error at all (except for the various daemons that need a network connection and failed after the network failure). Are there any additional steps I can take to find out the cause of this problem?

    EDIT: this just happened again. The server was completely unresponsive until I issued a networking service restart. Any advice is welcome. Could this be caused by a faulty hardware component?

    As per Madhatter's request, here are some excerpts from the log at the time (the network crashed at 20:13). /var/log/messages:

        Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
        Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=100 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
        Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
        Dec 2 20:13:34 graviton junglediskserver: Connection to gateway failed: xGatewayTransport - Connection to gateway failed.

    The first three messages are simple responses to iptables rules I have set up through the LFD firewall. The last message indicates that JungleDisk, which I use for backups, can no longer connect to the gateway. Apart from this, there are no interesting messages around this time.

    EDIT 4 Dec: as per Mattdm's request, here is the output of ethtool eth0. (Please note that these are the settings that currently work. If things go wrong again, I will be sure to post this again if necessary.)
        Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes

    As per Joris' request, here is also the output of route -n:

        aron@graviton [~]# route -n
        Kernel IP routing table
        Destination Gateway Genmask Flags Metric Ref Use Iface
        xx.xx.xx.58 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.42 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.43 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.41 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.46 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.47 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.44 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.45 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.50 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.51 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.48 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.49 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.54 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.52 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.53 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
        xx.xx.xx.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
        xx.xx.xx.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
        169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
        0.0.0.0 xx.xx.xx.62 0.0.0.0 UG 0 0 0 eth0

    The bottom xx.62 is my gateway.

    EDIT December 28th: the problem occurred again and I got the chance to compare some of the outputs of the above tests. What I found out is that arp -an returns an incomplete MAC address for my gateway (which is not under my control; the server is in a shared rack):

        During failure: ? (xx.xx.xx.62) at <incomplete> on eth0
        After service network restart: ? (xx.xx.xx.62) at 00:00:0C:9F:F0:30 [ether] on eth0

    Is this something I can fix or is it time for me to contact the data centre?
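
    An ARP entry for the gateway stuck at <incomplete> points at layer 2 (switch, cabling, or the gateway itself) rather than at routing on the host. When it happens again, a few quick probes from the remote console can narrow it down before the restart wipes the evidence. A diagnostic sketch — xx.xx.xx.62 is the placeholder gateway address from above, and arping may need the iputils package installed:

        #!/bin/sh
        # Run during the outage, before `service network restart` clears the state.
        GW=xx.xx.xx.62                # placeholder gateway from the route table above
        arp -an | grep "$GW"          # is the entry still <incomplete>?
        arping -I eth0 -c 4 "$GW"     # do any ARP replies come back at all?
        ethtool eth0 | grep -E 'Speed|Duplex|Link detected'   # did the link renegotiate?
        dmesg | tail -n 20            # NIC driver resets, link flaps?

    If arping gets no replies while the link stays up at 1000Mb/s full duplex, the frames are dying beyond the NIC, which would indeed be a conversation to have with the data centre.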

    Read the article

  • Asp.net mvc, view with multiple updatable parts - how?

    - by DerDres
    I have started doing asp.net mvc programming and like it more every day. Most of the examples I have seen use separate views for viewing and editing details of a specific entity. E.g. - table of music albums linking to separate 'detail' and 'update' views:

        [Action] | Title | Artist
        detail, update | Uuuh Baby | Barry White
        detail, update | Mr Mojo | Barry White

    With mvc, how can I achieve a design where the R and the U (CRUD) are represented in a single view, and furthermore where the user can edit separate parts of the view, thus limiting the amount of data the user can edit before saving? Example mockup - editing album details. I have achieved such a design with ajax calls, but I'm curious how to do this without ajax. Parts of my own take on this can be seen below. I use a flag (enum EditCode) indicating which part of the view, if any, has to render a form. Is such a design in accordance with the framework, and could it be done more elegantly?

    AlbumController:

        public class AlbumController : Controller { public ActionResult Index() { var albumDetails = from ManageVM in state.AlbumState.ToList() select ManageVM.Value.Detail; return View(albumDetails); } public ActionResult Manage(int albumId, EditCode editCode) { (state.AlbumState[albumId] as ManageVM).EditCode = (EditCode)editCode; ViewData["albumId"] = albumId; return View(state.AlbumState[albumId]); } [HttpGet] public ActionResult Edit(int albumId, EditCode editCode) { return RedirectToAction("Manage", new { albumId = albumId, editCode = editCode }); } // edit album details [HttpPost] public ActionResult EditDetail(int albumId, Detail details) { (state.AlbumState[albumId] as ManageVM).Detail = details; return RedirectToAction("Manage", new { albumId = albumId, editCode = EditCode.NoEdit });// zero being standard } // edit album thought [HttpPost] public ActionResult EditThoughts(int albumId, List<Thought> thoughts) { (state.AlbumState[albumId] as ManageVM).Thoughts = thoughts; return RedirectToAction("Manage", new { albumId = albumId, editCode = EditCode.NoEdit });// zero being standard }

    Flag - EditCode:

        public enum EditCode { NoEdit, Details, Genres, Thoughts }

    Manage view:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcApplication1.Controllers.ManageVM>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Manage </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Manage</h2> <% if(Model.EditCode == MvcApplication1.Controllers.EditCode.Details) {%> <% Html.RenderPartial("_EditDetails", Model.Detail); %> <% }else{%> <% Html.RenderPartial("_ShowDetails", Model.Detail); %> <% } %> <hr /> <% if(Model.EditCode == MvcApplication1.Controllers.EditCode.Thoughts) {%> <% Html.RenderPartial("_EditThoughts", Model.Thoughts); %> <% }else{%> <% Html.RenderPartial("_ShowThoughts", Model.Thoughts); %> <% } %>

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master however does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server, and then allow the file server to serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work?
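
    One way to tell auth.conf problems apart from SSL/header problems is to replay a failing request with curl using the agent's own certificate: if curl gets a 200 while the agent gets 403, the client-certificate headers Nginx passes to Passenger (the HTTP_X_CLIENT_DN / HTTP_X_CLIENT_VERIFY params above) become the prime suspect. A hedged sketch — the hostname and key paths are the ones quoted in the question and will differ per site:

        #!/bin/sh
        # Replay the failing CRL request with the agent's certificate.
        # Hostname and paths below are taken from the question (assumptions).
        MASTER=bangvmpllda02.XXXXX.com
        AGENT=blramisr195602.XXXXX.com
        curl -v \
          --cert   "/var/lib/puppet/ssl/certs/$AGENT.pem" \
          --key    "/var/lib/puppet/ssl/private_keys/$AGENT.pem" \
          --cacert /var/lib/puppet/ssl/certs/ca.pem \
          "https://$MASTER:8140/production/certificate_revocation_list/ca"

    A 403 here too would point back at auth.conf or the Rack config.ru; a 200 would point at the ssl_verify_client / passenger_set_cgi_param plumbing.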

    Read the article

  • Django ImageField issue with JPEG's

    - by Kieran Lynn
    I am having a major issue with PIL (Python Imaging Library) in Django, and have jumped through a lot of hoops and have thus far not been able to figure out what the root of the issue is. The problem essentially breaks down to not being able to upload JPEG images through the ImageField in the Django admin. But the issue is not as simple as installing libjpeg.

    First, I installed PIL (through Buildout) and realized once it was installed that I had not installed libjpeg, because JPEG support was not available. Having not set up the server myself, I just assumed that it was not installed, and I compiled libjpeg 8 from the source. This ended up in my /usr/local/lib/ directory. I cleared out my Buildout files and rebuilt everything. This time when PIL compiled I had JPEG support. But I went to the Django admin and tried to upload a JPEG through an ImageField with no luck. I got the "Upload a valid image. The file you uploaded was either not an image or a corrupted image" error.

    Just as a test I opened up the Django shell and ran the following:

        > import Image
        > i = Image.open( "/absolute_path/file.jpg" )
        > print i
        <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>

    This runs with no errors and shows that PIL is able to open JPEGs. After doing some reading, I came across this thread: Is it possible to control which libraries apache uses? Looks like PHP also uses libjpeg and is loading before Django, and is therefore loading libjpeg 6.2 first. This is shown when using lsof:

        COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        apache2 2561 www-data mem REG 202,1 146032 639276 /usr/lib/libjpeg.so.62.0.0

    So my thought is that I should be using libjpeg 6.2. So I removed the libjpeg located in my /usr/local/lib directory. After rereading the PIL installation instructions, I realized that I might not have the dev/header files for libjpeg that PIL needs. So I also uninstalled libjpeg using the aptitude uninstaller (sudo aptitude remove libjpeg62). Then, to ensure that I got the header files that PIL needed, I installed libjpeg using aptitude (sudo apt-get install libjpeg62-dev). From here I cleaned out my Buildout directory and reran Buildout, which in turn reinstalled PIL. Once again, I have JPEG support, now using libjpeg62. So I go to test in the Django admin. Still no JPEG support.

    So I wanted to test JPEG support in general and see, if the exception was not handled, what kind of error it would throw. So in my homepage view I added the following code to open a JPEG image:

        import Image
        i = Image.open( "/absolute_path/file.jpg" )
        v = i.verify()

    Then I pass i to the HTML view just to easily see the output. I deploy these changes to the server and restart. I am surprised not to see an error and get the following output:

        {{ i }} - <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>
        {{ v }} - None

    So at this point I am really confused: Why can I successfully open a JPEG while the admin cannot? Am I missing something, is this not an issue with libjpeg? If not an issue with libjpeg, why can I upload a PNG with no issues? Any help would be much appreciated, I have been on this for 2 days debugging with no luck.

    Setup:
    1. Rackspace Cloud Server
    2. Ubuntu 10.04
    3. Django 1.2.3 (installed through Buildout)
    4. PIL 1.1.7 (installed through Buildout)
    5. libjpeg 6.2 (installed through aptitude (sudo apt-get install libjpeg62-dev))
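
    Since the admin upload runs inside Apache (with PHP and its libjpeg already loaded) while the shell test runs from the CLI, the two can end up binding different libjpeg builds. Checking which library the compiled PIL extension actually links, and comparing it with what the Apache process has mapped, can confirm or kill that theory. A sketch — the _imaging module path is resolved at run time rather than hard-coded:

        #!/bin/sh
        # Which libjpeg is PIL's compiled extension linked against?
        IMAGING=$(python -c "import _imaging; print _imaging.__file__")
        echo "PIL extension: $IMAGING"
        ldd "$IMAGING" | grep -i jpeg

        # And which libjpeg does the running Apache have mapped?
        lsof 2>/dev/null | grep libjpeg | sort -u

    If ldd resolves to one soname and lsof shows Apache holding another, the version mismatch between the build-time headers and the run-time library is the likeliest explanation for the admin rejecting JPEGs.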

    Read the article

  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on an ARM Linux running on an embedded device (Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms:

    1) Root has /bin/bash as default shell, and can log in normally via SSH:

        $ grep root /etc/passwd
        root:x:0:0:root:/root:/bin/bash
        $ ssh root@NAS
        root@NAS's password:
        Last login: Sun Dec 16 14:06:56 2012 from desktop
        #

    2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash
        $ ssh joeuser@localhost
        joeuser@NAS's password:
        Last login: Sun Dec 16 14:07:22 2012 from desktop
        Permission denied, please try again.
        Connection to localhost closed.

    3) Changing joeuser's shell back to /bin/sh:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh
        $ ssh joeuser@localhost
        Last login: Sun Dec 16 15:50:52 2012 from localhost
        $

    To make things even more strange, I can log in as joeuser using /bin/bash using the serial console (!). Also, su - joeuser as root works fine, so the bash binary itself is working fine.

    In an act of despair, I changed joeuser's uid to 0 in /etc/passwd, but that also didn't work, so it doesn't seem to be anything permission-related. It seems that bash is doing some extra checking that sshd didn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh.

    I already went through every single item in sshd_config, and also put sshd in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config:

        LogLevel DEBUG
        LoginGraceTime 2m
        PermitRootLogin yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile %h/.ssh/authorized_keys
        ChallengeResponseAuthentication no
        UsePAM yes
        AllowTcpForwarding no
        ChrootDirectory none
        Subsystem sftp internal-sftp -f DAEMON -u 000

    And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell:

debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: HPN Buffer Size: 87380 debug1: sshd version OpenSSH_5.8p1-hpn13v11 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: read PEM private key done: type ECDSA debug1: private host key: #2 type 3 ECDSA debug1: rexec_argv[0]='/usr/syno/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on ::. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on 0.0.0.0 port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9 debug1: inetd sockets after dupping: 4, 4 Connection from 127.0.0.1 port 52212 debug1: HPN Disabled: 0, HPN Buffer Size: 87380 debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11 SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11 debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11 debug1: permanently_set_uid: 1024/100 debug1: MYFLAG IS 1 debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: AUTH STATE IS 0 debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: client->server aes128-ctr hmac-md5 none SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: server->client aes128-ctr hmac-md5 none debug1: expecting SSH2_MSG_KEX_ECDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user joeuser service ssh-connection method none SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser debug1: attempt 0 failures 0 debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: PAM: initializing for "joeuser" debug1: PAM: setting PAM_RHOST to "localhost" debug1: PAM: setting PAM_TTY to "ssh" debug1: userauth-request for user joeuser service ssh-connection method password debug1: attempt 1 failures 0 debug1: do_pam_account: called Accepted password for joeuser from 127.0.0.1 port 52212 ssh2 debug1: monitor_child_preauth: joeuser has been authenticated by privileged process debug1: PAM: establishing credentials User child is on pid 9129 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_new: session 0 debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 9130 debug1: session_exit_message: session 0 channel 0 pid 9130 debug1: session_exit_message: release channel 0 debug1: session_by_tty: session 0 tty /dev/pts/1 debug1: session_pty_cleanup: session 0 release /dev/pts/1 Received disconnect from 127.0.0.1: 11: disconnected by user debug1: do_cleanup debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: closing session debug1: PAM: deleting credentials

    Here you have the full output of sshd -dd, together with ssh -vv. Bash:

        # bash --version
        GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi)
        Copyright (C) 2007 Free Software Foundation, Inc.

    The bash binary was cross-compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving into the bash source code to investigate, but trying to avoid hours chasing something that may be silly.
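
    One quiet failure mode that may be worth ruling out before reading bash source: with UsePAM yes, a pam_shells entry (or anything else consulting getusershell()) will reject interactive logins for any shell not listed in /etc/shells, while root is traditionally exempt — which would match the root-works/joeuser-fails pattern above. This is only a hypothesis; a hedged check, assuming the new shell lives at /bin/bash:

        #!/bin/sh
        # Is the new shell registered? Some PAM/sshd setups reject shells
        # missing from /etc/shells for non-root users.
        grep -x /bin/bash /etc/shells || echo "/bin/bash missing from /etc/shells"

        # Confirm the cross-compiled binary and its loader actually resolve
        # in the login environment (not just from the serial console):
        ldd /bin/bash

    If /etc/shells is present and missing the entry, adding the line and retrying the SSH login is a cheap experiment; if that changes nothing, comparing the environment of a working serial-console login against the failing SSH one is the next step.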

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However, I will need a PHP framework for this, as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMWare image for development) and can configure it as needed. In no specific order:

    It has to be easily scalable; this is crucial. If the game becomes a steady success it will eventually outgrow the server beyond what the host provides and would have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework, please explain what would be needed.

    It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated.

    It should support internationalization. I think there is no excuse for any web framework not to provide means of translating the application and switching between languages without a lot of effort from the programmer.

    It must have very good community support, and preferably commercial support as well. Yes, I do know QCodo/QCubed is so nice, but it is not mature enough for this task.

    Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo or one of them alone - not exactly sure.

    Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice, if it is generally reliable and does not interfere with other requirements. This seems to be the case with CakePHP.

    I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say it is easy for this task, and I'm calling on your experience with building applications with similar requirements. So far I'm torn between Zend and CakePHP on the general criteria; however, all well-known frameworks offer the same functionality in some way or another, with different approaches, each with its own advantages and disadvantages.

    Edits: I am kinda new to MVC; however, I am willing to learn it, and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures (or whatever they're called) you recommend. I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told). Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • Running 32 bit assembly code on a 64 bit Linux & 64 bit Processor : Explain the anomaly.

    - by claws
    Hello, I've run into an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is the x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux:

        //hello.S
        #include <asm/unistd.h>
        #include <syscall.h>
        #define STDOUT 1

        .data
        hellostr:
            .ascii "hello wolrd\n";
        helloend:

        .text
        .globl _start
        _start:
            movl $(SYS_write), %eax  //ssize_t write(int fd, const void *buf, size_t count);
            movl $(STDOUT), %ebx
            movl $hellostr, %ecx
            movl $(helloend-hellostr), %edx
            int $0x80

            movl $(SYS_exit), %eax   //void _exit(int status);
            xorl %ebx, %ebx
            int $0x80
            ret

    Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also wouldn't be a problem. The problem arises because of differences in system calls & the call mechanism between a 64-bit OS & a 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux.

    asm/unistd_32.h defines:

        #define __NR_write 4
        #define __NR_exit 1

    asm/unistd_64.h defines:

        #define __NR_write 1
        #define __NR_exit 60

    Anyway, using macros instead of direct numbers should have paid off by ensuring correct system call numbers. When I assemble & link & run the program:

        $ cpp hello.S hello.s   // pre-processor
        $ as hello.s -o hello.o // assemble
        $ ld hello.o            // linker: converting relocatable to executable

    It's not printing hello world. In gdb it's showing: Program exited with code 01. I don't know how to debug in gdb. Using a tutorial, I tried to debug it, executing instruction by instruction and checking registers at each step; it's always showing me "program exited with 01". It would be great if someone could show me how to debug this.

        (gdb) break _start
        Note: breakpoint -10 also set at pc 0x4000b0.
        Breakpoint 8 at 0x4000b0
        (gdb) start
        Function "main" not defined.
        Make breakpoint pending on future shared library load? (y or [n]) y
        Temporary breakpoint 9 (main) pending.
        Starting program: /home/claws/helloworld
        Program exited with code 01.
        (gdb) info breakpoints
        Num Type Disp Enb Address What
        8 breakpoint keep y 0x00000000004000b0 <_start>
        9 breakpoint del y <PENDING> main

    I tried running strace. This is its output:

        execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0
        write(0, NULL, 12 <unfinished ... exit status 1>

    Explain the parameters of the write(0, NULL, 12) system call in the output of strace. What exactly is happening? I want to know the reason why exactly it's exiting with exit status 1. Can someone please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine.

    EDIT: After reading Paul R's answer, I checked my files:

        claws@claws-desktop:~$ file ./hello.o
        ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
        claws@claws-desktop:~$ file ./hello
        ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

    All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?
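
    For what it's worth, the file output above already names the likely culprit: the default 64-bit preprocessor picked up unistd_64.h, so SYS_write expanded to 1 — but int $0x80 still enters the 32-bit syscall table, where 1 is exit. The program therefore calls exit with %ebx holding STDOUT, i.e. exit(1), which is consistent with both the strace output and the exit status. A sketch of forcing a true 32-bit build instead (this assumes 32-bit toolchain support, e.g. gcc-multilib, is installed):

        #!/bin/sh
        # Build as a 32-bit binary so the unistd_32.h numbers and the
        # int $0x80 entry path agree. gcc runs cpp, as and ld in one step;
        # -nostdlib keeps our own _start as the entry point.
        gcc -m32 -nostdlib hello.S -o hello
        ./hello
        echo "exit status: $?"

    The same effect can be had with the separate tools by preprocessing with 32-bit macros (gcc -m32 -E), then as --32 and ld -m elf_i386.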

    Read the article

  • php crashes with no core file and this message : apc_mmap failed

    - by greg0ire
    Description of the problem

    Regularly, cron php processes crash on our production server, which results in mails with the following body:

        PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        Segmentation fault (core dumped)

    I think the Segmentation fault (core dumped) should result in core files being handled by apport and then written in /var/crashes, but the files I can see there have been there since yesterday, although the last crash occurred today:

        -rw-r----- 1 root whoopsie 1138528 mai 22 04:09 _usr_bin_php5.0.crash
        -rw-r----- 1 frontoffice whoopsie 1166373 mai 20 18:00 _usr_bin_php5.1005.crash
        -rw-r----- 1 frontoffice whoopsie 81622658 mai 22 00:05 _usr_sbin_php5-fpm.1005.crash

    I tried to download the last one anyway, and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's apc configuration:

        cat /etc/php5/cli/conf.d/20-apc.ini
        extension=apc.so
        apc.shm_size=512M
        apc.ttl=3600
        apc.user_ttl=3600
        apc.enable_cli=1

    I'm mostly worried about the apc.shm_size… isn't it too high or too low? I understand it has to do with the size of memory segments.

    Question(s)

    What could be the problem? How can I troubleshoot it (how can I get a valid core file)?

    System information

        free
        total used free shared buffers cached
        Mem: 5081296 4354684 726612 0 374744 959968
        -/+ buffers/cache: 3019972 2061324
        Swap: 522236 516888 5348

        cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

        php -v
        PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06)
        Copyright (c) 1997-2013 The PHP Group
        Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

    php -i excerpt:

        Configuration
        apc
        APC Support => enabled
        Version => 3.1.13
        APC Debugging => Disabled
        MMAP Support => Enabled
        MMAP File Mask =>
        Locking type => pthread mutex Locks
        Serialization Support => php
        Revision => $Revision: 327136 $
        Build Date => Nov 20 2012 18:41:36
        Directive => Local Value => Master Value
        apc.cache_by_default => On => On
        apc.canonicalize => On => On
        apc.coredump_unmap => Off => Off
        apc.enable_cli => On => On
        apc.enabled => On => On
        apc.file_md5 => Off => Off
        apc.file_update_protection => 2 => 2
        apc.filters => no value => no value
        apc.gc_ttl => 3600 => 3600
        apc.include_once_override => Off => Off
        apc.lazy_classes => Off => Off
        apc.lazy_functions => Off => Off
        apc.max_file_size => 1M => 1M
        apc.mmap_file_mask => no value => no value
        apc.num_files_hint => 1000 => 1000
        apc.preload_path => no value => no value
        apc.report_autofilter => Off => Off
        apc.rfc1867 => Off => Off
        apc.rfc1867_freq => 0 => 0
        apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix => upload_ => upload_
        apc.rfc1867_ttl => 3600 => 3600
        apc.serializer => default => default
        apc.shm_segments => 1 => 1
        apc.shm_size => 512M => 512M
        apc.shm_strings_buffer => 4M => 4M
        apc.slam_defense => On => On
        apc.stat => On => On
        apc.stat_ctime => Off => Off
        apc.ttl => 3600 => 3600
        apc.use_request_time => On => On
        apc.user_entries_hint => 4096 => 4096
        apc.user_ttl => 3600 => 3600
        apc.write_lock => On => On

        php -m
        [PHP Modules]
        apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm
        tidy tokenizer wddx xml xmlreader xmlwriter zip zlib
        [Zend Modules]

        ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 0
        file size (blocks, -f) unlimited
        pending signals (-i) 39531
        max locked memory (kbytes, -l) 64
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) 39531
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited
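
    Two things stand out in the output above: the .crash files are apport reports, not raw cores (hence gdb's "not a core file"), and ulimit -a shows core file size 0, so the kernel would not write a classic core in this shell anyway. A sketch of both routes — the .crash filename is the one listed above, and the crash directory is typically /var/crash on Ubuntu:

        #!/bin/sh
        # Route 1: unpack the apport report; the real core is inside as "CoreDump".
        apport-unpack /var/crash/_usr_sbin_php5-fpm.1005.crash /tmp/fpm-crash
        gdb /usr/sbin/php5-fpm /tmp/fpm-crash/CoreDump

        # Route 2: bypass apport for a debugging session and let the kernel
        # write classic core files (run as root; core_pattern is system-wide).
        ulimit -c unlimited
        echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

    Note that a cron job inherits its own limits, so for route 2 the ulimit -c unlimited has to be set in the environment that actually runs the crashing PHP process, not just in an interactive shell.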

    Read the article

  • Uncaught TypeError: Object #<an Object> has no method 'fullCalendar'

    - by Lalit
    Hi, I have embedded the FullCalendar control in my asp.net mvc application. It runs fine locally, but when I upload it to my domain server (third party) it shows me this error in the Chrome console (debugger): Uncaught TypeError: Object #<an Object> has no method 'fullCalendar' — and the control is not rendered.

    ** EDITED: My HTML code is this **

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> Index <% var serializer = new System.Web.Script.Serialization.JavaScriptSerializer(); %>

        <style type='text/css'>
        body { margin-top: 40px; text-align: center; font-size: 14px; font-family: "Lucida Grande",Helvetica,Arial,Verdana,sans-serif; } #calendar { width: 900px; margin: 0 auto; }
        </style>

        <script type="text/javascript">
        $(document).ready(function() { var date = new Date(); var d = date.getDate(); var m = date.getMonth(); var y = date.getFullYear(); var officerid = document.getElementById('officerid').value; url = "/TasksToOfficer/Calender/" + officerid; var currenteventIden = <%= serializer.Serialize( ViewData["iden"] ) %> var calendar = $('#calendar').fullCalendar({ header: { left: 'prev,next today', center: 'title', right: 'month,agendaWeek,agendaDay', border: 0 }, eventClick: function(event, element) { var title = prompt('Event Title:', event.title, { buttons: { Ok: true, Cancel: false} }); var iden = event.id; if (title) { var st = event.start; var ed = event.end; var aldy = event.allDay; var dt = event.date; event.title = title; calendar.fullCalendar('updateEvent',event); var date = new Date(st); var NextMonth = date.getMonth() + 1; var dateString = (date.getDate()) + '/' + NextMonth + '/' + date.getFullYear(); var QueryStringForEdit=null; QueryStringForEdit="officerid=" + officerid + "&description=" + title + "&date=" + dateString + "&IsForUpdate=true&iden=" + iden; if (officerid) { $.ajax( { type: "POST", url: "/TasksToOfficer/Create", data: QueryStringForEdit, success: function(result) { if (result.success) $("#feedback input").attr("value", ""); // clear all the input fields on success
        }, error: function(req, status, error) { } }); } } }, selectable: true, selectHelper: true, select: function(start, end, allDay) { var title = prompt('Event Title:', { buttons: { Ok: true, Cancel: false } } ); if (title) { calendar.fullCalendar('renderEvent', { title: title, start: start, end: end, allDay: allDay }, false); // This is false, because do not show same event on same date after render from server side.
        var date = new Date(start); var NextMonth = date.getMonth() + 1; // Reason: it is because the month array starts from 0
        var dateString = (date.getDate()) + '/' + NextMonth + '/' + date.getFullYear(); if (officerid) { $.ajax({ type: "POST", url: "/TasksToOfficer/Create", data: "officerid=" + officerid + "&description=" + title + "&date=" + dateString + "&IsForUpdate=false", success: function(result) { if (result.success) $("#feedback input").attr("value", ""); //$("#feedback_status").slideDown(250).text(result.message);
        }, error: function(req, status, error) { } }); } } calendar.fullCalendar('unselect'); }, editable: true, events: url }); });
        //--------------------------------------------------------------------------//
        </script>

        <h2> Index</h2>
        <div id="calendar"> </div>
        <input id="officerid" type="hidden" value="<%=ViewData["officerid"].ToString()%>" />

    Read the article

  • "interface not found" in WCF Moniker without registration for excel

    - by tbischel
    I'm trying to connect excel to a WCF service, but I can't seem to get even a trivial case to work... I get an Invalid Syntax error when I try and create the proxy in excel. I've attached the visual studio debugger to excel, and get that the real error is "interface not found". I know the service works because the test client created by visual studio is ok... so the problem is in the VBA moniker string. I'm hoping to find one of two things: 1) a correction to my moniker string that will make this work, or 2) an existing sample project to download that has the source for both the host and client that does work. Here is the code for my VBA client: Dim addr As String addr = "service:mexAddress=net.tcp://localhost:7891/Test/WcfService1/Service1/mex, " addr = addr + "address=net.tcp://localhost:7891/Test/WcfService1/Service1/, " addr = addr + "contract=IService1, contractNamespace=http://tempuri.org, " addr = addr + "binding=NetTcpBinding_IService1, bindingNamespace=""http://tempuri.org""" MsgBox (addr) Dim service1 As Object Set service1 = GetObject(addr) MsgBox service1.Test(12) I have the following service: [ServiceContract] public interface IService1 { [OperationContract] string GetData(int value); } public class Service1 : IService1 { public string GetData(int value) { return string.Format("You entered: {0}", value); } } It has the following config file: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service behaviorConfiguration="WcfService1.Service1Behavior" name="WcfService1.Service1"> <endpoint address="" binding="netTcpBinding" bindingConfiguration="" contract="WcfService1.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:7891/Test/WcfService1/Service1/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfService1.Service1Behavior"> <serviceMetadata httpGetEnabled="false" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration>

    Read the article

  • Encode audio to aac with libavcodec

    - by ryan
    I'm using libavcodec (latest git as of 3/3/10) to encode raw pcm to aac (libfaac support enabled). I do this by calling avcodec_encode_audio repeatedly with codec_context->frame_size samples each time. The first four calls return successfully, but the fifth call never returns. When I use gdb to break, the stack is corrupt. If I use Audacity to export the pcm data to a .wav file, then I can use command-line ffmpeg to convert to aac without any issues, so I'm sure it's something I'm doing wrong. I've written a small test program that duplicates my problem. It reads the test data from a file, which is available here: http://birdie.protoven.com/audio.pcm (~2 seconds of signed 16 bit LE pcm)

    I can make it all work if I use FAAC directly, but the code would be a little cleaner if I could just use libavcodec, as I'm also encoding video, and writing both to an mp4. ffmpeg version info:

        FFmpeg version git-c280040, Copyright (c) 2000-2010 the FFmpeg developers
        built on Mar 3 2010 15:40:46 with gcc 4.4.1
        configuration: --enable-libfaac --enable-gpl --enable-nonfree --enable-version3 --enable-postproc --enable-pthreads --enable-debug=3 --enable-shared
        libavutil 50.10. 0 / 50.10. 0
        libavcodec 52.55. 0 / 52.55. 0
        libavformat 52.54. 0 / 52.54. 0
        libavdevice 52. 2. 0 / 52. 2. 0
        libswscale 0.10. 0 / 0.10. 0
        libpostproc 51. 2. 0 / 51. 2. 0

    Is there something I'm not setting, or setting incorrectly in my codec context, maybe? Any help is greatly appreciated! Here is my test code:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>

        void EncodeTest(int sampleRate, int channels, int audioBitrate, uint8_t *audioData, size_t audioSize)
        {
            AVCodecContext *audioCodec;
            AVCodec *codec;
            uint8_t *buf;
            int bufSize, frameBytes;

            avcodec_register_all();

            //Set up audio encoder
            codec = avcodec_find_encoder(CODEC_ID_AAC);
            if (codec == NULL) return;
            audioCodec = avcodec_alloc_context();
            audioCodec->bit_rate = audioBitrate;
            audioCodec->sample_fmt = SAMPLE_FMT_S16;
            audioCodec->sample_rate = sampleRate;
            audioCodec->channels = channels;
            audioCodec->profile = FF_PROFILE_AAC_MAIN;
            audioCodec->time_base = (AVRational){1, sampleRate};
            audioCodec->codec_type = CODEC_TYPE_AUDIO;
            if (avcodec_open(audioCodec, codec) < 0) return;

            bufSize = FF_MIN_BUFFER_SIZE * 10;
            buf = (uint8_t *)malloc(bufSize);
            if (buf == NULL) return;

            frameBytes = audioCodec->frame_size * audioCodec->channels * 2;
            while (audioSize >= frameBytes)
            {
                int packetSize;
                packetSize = avcodec_encode_audio(audioCodec, buf, bufSize, (short *)audioData);
                printf("encoder returned %d bytes of data\n", packetSize);
                audioData += frameBytes;
                audioSize -= frameBytes;
            }
        }

        int main()
        {
            FILE *stream = fopen("audio.pcm", "rb");
            size_t size;
            uint8_t *buf;

            if (stream == NULL)
            {
                printf("Unable to open file\n");
                return 1;
            }

            fseek(stream, 0, SEEK_END);
            size = ftell(stream);
            fseek(stream, 0, SEEK_SET);
            buf = (uint8_t *)malloc(size);
            fread(buf, sizeof(uint8_t), size, stream);
            fclose(stream);

            EncodeTest(32000, 2, 448000, buf, size);
        }
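
    As a cross-check that the raw data itself encodes, the same PCM can be fed straight to the ffmpeg CLI as headerless s16le, which exercises the same libfaac path without the custom calling code. A sketch matching the test program's parameters (flag spellings as used by ffmpeg builds of that era; the 448k bitrate simply mirrors the 448000 passed to EncodeTest):

        #!/bin/sh
        # Encode the raw test data directly: signed 16-bit little-endian, 32 kHz, stereo.
        ffmpeg -f s16le -ar 32000 -ac 2 -i audio.pcm -acodec libfaac -ab 448k out.aac

    If this succeeds on the same file, the PCM and libfaac are fine, and the difference has to be in how the test program sizes its reads relative to frame_size — note that frameBytes is an int while audioSize is a size_t, so the final short read and the buffer handed to avcodec_encode_audio are the places to scrutinize.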


  • Postmortem debugging with WinDBG.

    - by Drazar
    I have a WCF service running on a server, and occasionally (1-2 times every month) it throws a COMException with the informative message "Unknown error (0x80005008)". When I googled this particular error I only got threads about problems when creating virtual directories in IIS, and the source code has nothing to do with creating a virtual directory in IIS.

        DirectoryServiceLib.LdapProvider.Directory - CreatePost - Could not create employee for 195001010000,000000000000: System.Runtime.InteropServices.COMException (0x80005008): Unknown error (0x80005008)
           at System.DirectoryServices.PropertyValueCollection.PopulateList

    I took a memory dump when I caught the exception, for further analysis in WinDBG. After switching to the right thread I executed the !CLRStack command:

        000000001b8ab6d8 000000007708671a [NDirectMethodFrameStandalone: 000000001b8ab6d8] Common.MemoryDump.MiniDumpWriteDump(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab680 000007ff002808d8 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr)
        000000001b8ab780 000007ff00280812 Common.MemoryDump.CreateMiniDump(System.String)
        000000001b8ab7e0 000007ff0027b218 DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8ad6d8 000007fef8816869 [HelperMethodFrame: 000000001b8ad6d8]
        000000001b8ad820 000007feec2b6c6f System.DirectoryServices.PropertyValueCollection.PopulateList()
        000000001b8ad860 000007feec225f0f System.DirectoryServices.PropertyValueCollection..ctor(System.DirectoryServices.DirectoryEntry, System.String)
        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
        000000001b8ad8f0 000007ff00274d34 Common.DirectoryEntryExtension.GetStringAttribute(System.String)
        000000001b8ad940 000007ff0027f507 DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)
        000000001b8ad980 000007ff0027a7cf DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adbe0 000007ff00279532 DirectoryServiceLib.WCFDirectory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String)
        000000001b8adc60 000007ff001f47bd DynamicClass.SyncInvokeCreatePost(System.Object, System.Object[], System.Object[])

    My conclusion is that it fails when the code calls System.DirectoryServices.PropertyCollection.get_Item(System.String). So after issuing !CLRStack -a I get this result:

        000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String)
            PARAMETERS:
                this = <no data>
                propertyName = <no data>
            LOCALS:
                <CLR reg> = 0x0000000001dcef78 <no data>

    My very first question is: why does it display no data for propertyName? I am kind of new to WinDBG. However, I executed a DumpObject on 0x0000000001dcef78:

        0:013> !do 0x0000000001dcef78
        Name: System.String
        MethodTable: 000007fef66d6960
        EEClass: 000007fef625eec8
        Size: 74(0x4a) bytes
        File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
        String: personalprescriptioncode
        Fields:
                      MT    Field   Offset                 Type VT     Attr            Value Name
        000007fef66dc848  40000ed        8         System.Int32  1 instance               24 m_stringLength
        000007fef66db388  40000ee        c          System.Char  1 instance               70 m_firstChar
        000007fef66d6960  40000ef       10        System.String  0   shared           static Empty
                                >> Domain:Value 0000000000174e10:00000000019d1420 000000001a886f50:00000000019d1420 <<

    So when the source code wants to fetch personalprescriptioncode from Active Directory (which is used as the persistence layer), it fails. Looking back at the stack, it happens when issuing the Copy method:

        DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost)

    So, looking in the source code:

        DirectoryPost postInLimbo = DirectoryPostFactory.Instance().GetDirectoryPost(
            LdapConfigReader.Instance().GetConfigValue("LimboDN"), idGenPerson.ID.UserId);
        if (postInLimbo != null)
            newPost.Copy(postInLimbo);

    This code looks for another post in OU=limbo with the same UserId, and if it finds one it copies the attributes to the new post. In this case it does, and it fails on personalprescriptioncode. I've looked in Active Directory under OU=Limbo, and the post exists there with the attribute personalprescriptioncode=31243.

    Question 1: Why does it display no data for some of the PARAMETERS and LOCALS? Has the GC cleaned them up before the memory dump was created?

    Question 2: Is there anything more I can do to get to the bottom of this problem?
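    On Question 1: <no data> for parameters and locals in !CLRStack -a usually means the JIT compiled that frame with optimizations (a release build keeps such values in registers only), not that the GC collected them; the dump was taken while the frame was still live. A few stock SOS commands can sometimes recover the values anyway; the command names are standard SOS, but whether they help on this particular dump is a guess:

        !pe            (PrintException: dump the managed exception on this thread)
        !dso           (DumpStackObjects: every object reference on this thread's stack,
                        which often recovers strings that !clrstack -a shows as <no data>)
        !clrstack -p   (parameters only; worth re-running after !dso gives addresses)

    Rebuilding the service in a debug configuration, or disabling optimizations for the affected assembly, would make PARAMETERS and LOCALS readable in the next dump.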


  • curl_init undefined?

    - by udaya
    Hi, I am importing contacts from Gmail into my page. The process does not work due to this error: 'curl_init' is not defined. The suggestions I got were:

    1. uncomment the curl.dll extension line in php.ini
    2. copy the following libraries to the windows/system32 dir: ssleay32.dll, libeay32.dll
    3. copy php_curl.dll to windows/system32

    After trying all these I restarted my XAMPP. Even then the error occurs. This is my page where I am trying to import the Gmail contacts:

        // create a new cURL resource
        $ch = curl_init();
        // set URL and other appropriate options
        curl_setopt($ch, CURLOPT_URL, "http://www.example.com/");
        curl_setopt($ch, CURLOPT_HEADER, 0);
        // grab URL and pass it to the browser
        curl_exec($ch);
        // close cURL resource, and free up system resources
        curl_close($ch);

        $clientlogin_post = array(
            "accountType" => "HOSTED_OR_GOOGLE",
            "Email"       => $_POST['Email'],
            "Passwd"      => $_POST['Passwd'],
            "service"     => "cp",
            "source"      => "tutsmore/1.2"
        );
        //Now we are going to post these datas to the clientLogin url.
        //Initialize the curl object with the url
        $curl = curl_init($clientlogin_url);
        //Make the post true
        curl_setopt($curl, CURLOPT_POST, true);
        //Passing the above array of parameters.
        curl_setopt($curl, CURLOPT_POSTFIELDS, $clientlogin_post);
        //Set this for authentication and ssl communication.
        curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        //provide false to not check the server for the certificate.
        curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
        //Tell curl to not echo the data but return it to a variable.
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        //The variable containing the response
        $response = curl_exec($curl);
        //Check whether the user has successfully logged in using preg_match,
        //and save the auth key if the user is successfully logged in
        preg_match("/Auth=([a-z0-9_-]+)/i", $response, $matches);
        $auth = $matches[1];
        // Include the Auth string in the headers
        $headers = array("Authorization: GoogleLogin auth=" . $auth);
        // Make the request to the google contacts feed with the auth key
        $curl = curl_init('http://www.google.com/m8/feeds/contacts/default/full?max-results=10000');
        //passing the headers of the auth key.
        curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
        //Return the result in a variable
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        //the variable with the response.
        $feed = curl_exec($curl);
        //Create an empty array of contacts
        $contacts = array();
        //Initialize the DOMDocument object
        $doc = new DOMDocument();
        //Check whether the feed is empty. If not, load and process it.
        if (!empty($feed)) {
            $doc->loadHTML($feed);
            //Initialize the DOMXPath object and provide the loaded feed
            $xpath = new DOMXPath($doc);
            //Get every entry tag from the feed.
            $query = "//entry";
            $data = $xpath->query($query);
            //Process each entry tag
            foreach ($data as $node) {
                //children of each entry tag.
                $entry_nodes = $node->childNodes;
                //Create a temporary array.
                $tempArray = array();
                //Process the child nodes of the entry tag.
                foreach ($entry_nodes as $child) {
                    //get the tag name of the child node.
                    $domNodesName = $child->nodeName;
                    switch ($domNodesName) {
                        case 'title':
                            $tempArray['fullName'] = $child->nodeValue;
                            break;
                        case 'email':
                            if (strpos($child->getAttribute('rel'), 'home') !== false)
                                $tempArray['email_1'] = $child->getAttribute('address');
                            elseif (strpos($child->getAttribute('rel'), 'work') !== false)
                                $tempArray['email_2'] = $child->getAttribute('address');
                            elseif (strpos($child->getAttribute('rel'), 'other') !== false)
                                $tempArray['email_3'] = $child->getAttribute('address');
                            break;
                    }
                }
                if (!empty($tempArray['email_1'])) $contacts[$tempArray['email_1']] = $tempArray;
                if (!empty($tempArray['email_2'])) $contacts[$tempArray['email_2']] = $tempArray;
                if (!empty($tempArray['email_3'])) $contacts[$tempArray['email_3']] = $tempArray;
            }
            foreach ($contacts as $key => $val) {
                //Echo the email
                echo $key . "<br/>";
            }
        } else {
            //The form: an HTML page with a POST form asking for Email and Password,
            //plus the note "tutsmore don't save your email and password trust us."
        }

    The code is provided completely for debugging; if any optimization is needed I will try to optimize the code.
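    Before shuffling DLLs again, it may help to confirm which PHP configuration the web server is actually reading, since under XAMPP the CLI and Apache can load different php.ini files. A minimal probe (plain PHP, nothing assumed beyond a working web server); run it through the browser, not the command line:

        <?php
        // Is the cURL extension loaded into *this* PHP (the one Apache runs)?
        var_dump(extension_loaded('curl'));     // expect bool(true)
        var_dump(function_exists('curl_init')); // expect bool(true)
        // Which php.ini is actually in effect for this PHP?
        echo php_ini_loaded_file();
        ?>

    If extension_loaded('curl') reports false, the extension=php_curl.dll line in that particular php.ini is still commented out (or extension_dir does not point at the directory containing php_curl.dll), and Apache must be restarted after changing it.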


  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc. as best I can - putting on my Architect hat.

    For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc. for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself.

    Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place.

    Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor.

    The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need:

    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client to account for network outages
    - cross-platform access
    - client-side access from certainly Windows (probably XP upwards), possibly Linux
    - backend integrates with AD/LDAP (permission management (user/group management, ...))
    - should work reasonably well over slow WAN links

    Another problem is that we don't really know all possible use cases here: whether people need to have concurrent access to shared files, or whether they will only be accessing their own files. So a possible solution needs to account for concurrent access, and for how conflict management would look from a user's point of view. This two-year-old blog post sums up the impression that I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried

    OpenAFS

    We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:

    - it's a real pain to set up
    - there are no official RHEL/CentOS packages
    - the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go
    - Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for 1.7 does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.

    Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.

    Samba + Unison

    Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. This has the following properties:

    - Samba integrates with ADs; it's a pain, but it can be done.
    - Samba solves the problem of remotely accessing the storage from Windows, but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of that to maintain HA, which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers.
    - Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected.
    - It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way.
    - On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files"

    After that we became a little desperate and gave Windows "Offline Files" a chance. We figured that having something that is built into the OS would reduce administrative effort, make it easier to blame someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (and silently so: there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems:

    - Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops.
    - In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary

    Unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?


  • Running 32 bit assembly code on a 64 bit Linux & 64 bit Processor: Explain the anomaly.

    - by claws
    Hello, I've run into an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux:

        #include <asm/unistd.h>
        #include <syscall.h>
        #define STDOUT 1

        .data
        hellostr: .ascii "hello wolrd\n";
        helloend:

        .text
        .globl _start
        _start:
            movl $(SYS_write), %eax        // ssize_t write(int fd, const void *buf, size_t count);
            movl $(STDOUT), %ebx
            movl $hellostr, %ecx
            movl $(helloend-hellostr), %edx
            int $0x80

            movl $(SYS_exit), %eax         // void _exit(int status);
            xorl %ebx, %ebx
            int $0x80
            ret

    Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also wouldn't be a problem. The problem arises because of differences in system calls & the call mechanism between a 64-bit OS & a 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux.

    asm/unistd_32.h defines:

        #define __NR_write 4
        #define __NR_exit 1

    asm/unistd_64.h defines:

        #define __NR_write 1
        #define __NR_exit 60

    Anyway, using macros instead of direct numbers paid off: it ensures the correct system call numbers. When I assemble, link & run the program, it doesn't print "hello world". In gdb it shows: Program exited with code 01. I don't know how to debug in gdb. Following a tutorial, I tried to debug it, executing instruction by instruction and checking the registers at each step, but it always shows me "Program exited with 01". It would be great if someone could show me how to debug this.

        (gdb) break _start
        Note: breakpoint -10 also set at pc 0x4000b0.
        Breakpoint 8 at 0x4000b0
        (gdb) start
        Function "main" not defined.
        Make breakpoint pending on future shared library load? (y or [n]) y
        Temporary breakpoint 9 (main) pending.
        Starting program: /home/claws/helloworld
        Program exited with code 01.
        (gdb) info breakpoints
        Num     Type           Disp Enb Address            What
        8       breakpoint     keep y   0x00000000004000b0 <_start>
        9       breakpoint     del  y   <PENDING>          main

    I tried running strace. This is its output:

        execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0
        write(0, NULL, 12 <unfinished ... exit status 1>

    My questions:

    1. Explain the parameters of the write(0, NULL, 12) system call in the output of strace. What exactly is happening? I want to know why exactly it exits with exit status 1.
    2. Can someone please show me how to debug this program using gdb?
    3. Why did they change the system call numbers?
    4. Change this program appropriately so that it can run correctly on this machine.
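    For question 4, here is a sketch of the same program ported to the x86-64 syscall convention, using only the facts already quoted above from asm/unistd_64.h (write = 1, exit = 60). On x86-64 the arguments go in %rdi, %rsi, %rdx, and the syscall instruction replaces int $0x80. Untested here, so treat it as a starting point:

        .data
        hellostr: .ascii "hello world\n"
        helloend:

        .text
        .globl _start
        _start:
            movq $1, %rax                    # __NR_write on x86-64
            movq $1, %rdi                    # fd = STDOUT
            movq $hellostr, %rsi             # buf
            movq $(helloend-hellostr), %rdx  # count
            syscall

            movq $60, %rax                   # __NR_exit on x86-64
            xorq %rdi, %rdi                  # status = 0
            syscall

    Note there is no trailing ret: _exit does not return, so the original ret after int $0x80 was dead code anyway.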


  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on the Cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to the ASA 5505 from my Windows XP machine, after I click the "connect" button the "Connecting ...." dialog box appears, and after a while I get this error message:

        Error 800: Unable to establish VPN connection. The VPN server may be unreachable,
        or security parameters may not be configured properly for this connection.

    ASA version 7.2(4), ASDM version 5.2(4), Windows XP SP3. Windows XP and the ASA 5505 are on the same LAN for test purposes.

    Edit 1: There are two VLANs defined on the Cisco device (the standard setup on the Cisco ASA 5505): port 0 is on VLAN2, outside; ports 1 to 7 are on VLAN1, inside. I run a cable from my Linksys home router (10.50.10.1) to the Cisco ASA 5505 on port 0 (outside). Port 0 has IP 192.168.1.1 used internally by Cisco, and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to the Cisco router on port 1 (inside). Port 1 is assigned an IP from the Cisco router, 192.168.1.2. The Windows XP machine is also connected to my Linksys home router via wireless (10.50.10.141).

    Edit 2: When I try to establish the VPN, the Cisco device's real-time log viewer shows 7 entries like this:

        Severity: 5  Date: Sep 15 2009  Time: 14:51:29  SyslogID: 713904
        Destination IP = 10.50.10.141, Description: No crypto map bound to interface... dropping pkt

    Edit 3: This is the setup on the router right now. Result of the command "show run":

        : Saved
        :
        ASA Version 7.2(4)
        !
        hostname ciscoasa
        domain-name default.domain.invalid
        enable password HGFHGFGHFHGHGFHGF encrypted
        passwd NMMNMNMNMNMNMN encrypted
        names
        name 192.168.1.200 WebServer1
        name 10.50.10.206 external-ip-address
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.1.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address external-ip-address 255.0.0.0
        !
        interface Vlan3
         no nameif
         security-level 50
         no ip address
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        ftp mode passive
        dns server-group DefaultDNS
         domain-name default.domain.invalid
        object-group service l2tp udp
         port-object eq 1701
        access-list outside_access_in remark Allow incoming tcp/http
        access-list outside_access_in extended permit tcp any host WebServer1 eq www
        access-list outside_access_in extended permit udp any any eq 1701
        access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240
        access-list inside_cryptomap_1 extended permit ip interface outside interface inside
        pager lines 24
        logging enable
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-524.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255
        access-group outside_access_in in interface outside
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        http server enable
        http 192.168.1.0 255.255.255.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport
        crypto map outside_map 1 match address inside_cryptomap_1
        crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5
        crypto map outside_map interface inside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash md5
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        dhcpd address 192.168.1.2-192.168.1.33 inside
        dhcpd enable inside
        !
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.1.1
         vpn-tunnel-protocol IPSec l2tp-ipsec
        username myusername password FGHFGHFHGFHGFGFHF nt-encrypted
        tunnel-group DefaultRAGroup general-attributes
         address-pool PPTP-VPN
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        !
        prompt hostname context
        Cryptochecksum:a9331e84064f27e6220a8667bf5076c1
        : end
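    One detail in the configuration lines up with the syslog entries from Edit 2: the log complains that no crypto map is bound to the interface the packets arrive on, and the running config binds outside_map to the inside interface (crypto map outside_map interface inside). A hedged guess, not a verified fix, would be to rebind it to the outside interface:

        ciscoasa(config)# no crypto map outside_map interface inside
        ciscoasa(config)# crypto map outside_map interface outside

    The inside_cryptomap_1 ACL ("permit ip interface outside interface inside") may deserve a second look for the same reason, since it currently describes traffic between the two interfaces themselves.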


  • Compiling ruby1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 server LTS (64-bit), by doing the following:

        $ ./configure
        $ make
        $ sudo make install

    ./configure works without complaints. make hangs indefinitely until all my RAM and swap are gone. It gets stuck after the following output:

        compiling ripper
        make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper'
        gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c

    If I run the gcc command by hand, with the -v argument to get verbose output, it hangs after the following:

        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
        Thread model: posix
        gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s
        ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
        ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include"
        ignoring nonexistent directory "/usr/include/x86_64-linux-gnu"
        ignoring duplicate directory "../.././ext/ripper"
        ignoring duplicate directory "../../."
        #include "..." search starts here:
        #include <...> search starts here:
         .
         ../../.ext/include/x86_64-linux
         ../.././include
         ../..
         /usr/local/include
         /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include
         /usr/include
        End of search list.
        GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu)
                compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4).
        GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795
        Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3

    I just compiled Ruby on another computer for reference, and it took about 10 seconds to print the following output (after the above "Compiler executable checksum" line):

        COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'
        as -V -Qy -o ripper.o /tmp/cca4fa7R.s
        GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20
        COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/
        LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/
        COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'

    I have absolutely no clue what could be going wrong here - any ideas where I should start?
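    Two cheap experiments while hunting this, both guesses rather than known fixes: cap gcc's memory so the hang turns into a fast, visible failure instead of eating all the swap, and retry the one troublesome file with the optimizer off. ripper.c is a large generated parser, and an optimizer blow-up in the distro's gcc 4.2.4 would match the symptoms. Paths and flags below are copied from the make output above, with only -O2 changed to -O0:

        ulimit -v 1048576   # cap this shell's processes at ~1 GiB of virtual memory (adjust to taste)
        cd /tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper
        gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper \
            -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O0 -g \
            -Wall -Wno-parentheses -o ripper.o -c ripper.c

    If the -O0 compile finishes, that points at the optimizer rather than the source, and a newer compiler (or building just this object without optimization) becomes the likely way out.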


  • How do I evaluate my skillset against the current market to see what needs improvement and where my

    - by baijajusav
    First of all, this question may be out of bounds for this site. If so, remove it. I say this because this site seems to be a place for more concrete questions that are not so relative in nature. And before I begin, for those of you who just prefer a question and not this sort of dialog, here is my question: how can I assess my current skills as a programmer and decide where and what areas to improve upon?

    That said, here's what I'm asking/talking about, in essence. The market is always in constant flux. As programmers we're always having to learn new things, update our skills, push ourselves into that next project. There's not a very good litmus test that I know of for us to get an idea of where we stand as programmers. I came across this blog post by Jeff Atwood talking about why programmers can't code. Instinctively (and as the post goes on to state) I rushed through the program in about 4 minutes (most of that time was because I was hand-writing it out). Still, this doesn't really answer the question of where my skills need to be to succeed in today's world.

    I read blogs, listen to podcasts, try to keep up on the latest things coming out. It has only been in the past couple of months that I made a decision to pick a focus area for my learning, as I can't learn everything and trying to do so is to spread myself too thin. I chose ASP.NET MVC & C#. I plan to stick with Microsoft technologies, not out of some sense of loyalty or stubbornness, but rather because they seem to stream together and have a unifying connection between them. With Windows Phone 7 coming out, it seems that now is the obvious time to pick up WPF and Silverlight as well. Still, if you asked me to code something apart from IntelliSense and the internet, I probably couldn't get the syntax right. I don't have libraries memorized, nor do I know precisely where the classes I use exist within the .NET Framework, namely because I haven't had to pull that knowledge out of the air. In a way, I suppose Visual Studio has insulated me, which isn't a good thing, but, at the same time, I've still been able to be productive.

    I'm working on my own side project to try and help my learning. In doing so, I'm trying to make use of best practices and 3rd-party frameworks where I can. I'm using AutoMapper and EF 1.0. I know everyone in the .NET community seems to cry foul at the sound of EF 1.0, but I can't say why because I've never used anything else. There's no lazy loading, and that has proven rather annoying; however, aside from that, I haven't had that much of an issue. Granted, this is probably because I'm not writing tests as I go (which I'm not doing because I don't know how to test EF in tests and don't really have a clue how to write tests for ASP.NET MVC 1.0). I'm also using a custom membership provider; granted, it's a barebones implementation, but I'm using it still. My thinking in all of this is that, while I am neglecting a great many important technologies that are in the mainstream, I'll have a working project in the end. I can come back and add those things after I finish. Doing it all now and at once seems like too much. I know how I work, and I don't think I'd ever get it done that way.

    I've elected to make this a community wiki as I think this question might fit better there. If a moderator disagrees with that choice or the decision to post this here, then just delete the question. I'm not trying to make undue work for anyone. I'm just a programmer trying to assess where his skills are now and where I should be improving.


  • Google Drive Update keeps throwing an error when I try to save a thumbnail

    - by Dano64
    The Base64 string checks out fine. I was able to export it to another website and download it again as my image. When I try to do an update using the Drive JavaScript API, it just keeps returning this error:

        Invalidvaluefor:Notavalidbase64bytestring

    I also tried making the string URL-safe, per this page: https://google-api-client-libraries.appspot.com/documentation/drive/v2/python/latest/drive_v2.files.html

    Am I doing something wrong here? The documentation says to send Base64; the string is valid, and it stays intact throughout the process, but Google will not accept it. I am using the JavaScript API, and I think there may be a bug in sending the thumbnail through it. This is the request:

        Request URL: https://www.googleapis.com/upload/drive/v2/files/?uploadType=multipart
        Request Method: POST
        Status Code: 400 Bad Request

        Request Headers:
        :host: www.googleapis.com
        :method: POST
        :path: /upload/drive/v2/files/?uploadType=multipart
        :scheme: https
        :version: HTTP/1.1
        accept: */*
        accept-encoding: gzip,deflate,sdch
        accept-language: en-US,en;q=0.8
        authorization: Bearer ya29.AHES6ZQr...YXDacdY4
        content-length: 14313
        content-type: multipart/mixed; boundary="--mpart_delim"
        origin: https://www.googleapis.com
        x-javascript-user-agent: google-api-javascript-client/1.1.0-beta
        x-origin: https://app.pinteract.com
        x-referer: https://app.pinteract.com

        Query String Parameters:
        uploadType: multipart

        Request Payload:
        ----mpart_delim
        Content-Type: application/json

        {"id":null,"title":"Test Pinup.pint","mimeType":"application/vnd.pinteract.pint","thumbnail":{"mimeType":"image/png","image":"iVBORw0KGgoAAAANSUh...UVORK5CYII%3D"}}
        ----mpart_delim
        Content-Type: application/vnd.pinteract.pint
        Content-Transfer-Encoding: binary

        { "header" : {"id":"215A660A"}
          "members" : [ {"id":"100523752012631912873"} ],
          "manifest" : [ {"id":"0000","ele":[],"own":"100523752012631912873","dtc":1371679680000,"txt":"&Delta; Created","typ":0} ],
          "elements" : [ {"id":"0F54","x":560,"y":264,"bak":"#44ff44","own":"100523752012631912873","srt":"544","sta":0,"wid":120,"hgt":120,"dtc":0,"rec":"","txt":"This is Note"} ] }
        ----mpart_delim--

        Response Headers:
        content-length: 10848
        content-type: application/json
        date: Mon, 01 Jul 2013 14:41:33 GMT
        server: HTTP Upload Server Built on Jun 25 2013 11:32:14 (1372185134)
        status: 400 Bad Request
        version: HTTP/1.1

    This is the response:

        { "error": { "errors": [ { "domain": "global", "reason": "invalid",
            "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSUhEUg...AASUVORK5CYII%3D" } ],
          "code": 400,
          "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSU...lRtkAAAAASUVORK5CYII%3D" } }

    Raw Base64 String:
iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUMrPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH
6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII="}],"code":400,"message":"Invalidvaluefor:Notavalidbase64bytestring:iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUM
rPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII= Url Encoded Base64 String 
iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE%2FCLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z%2B3Xunfsq1r7tvNe56TAfO2%2FezO93zu%2Fce%2B6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj%2FXCTm5%2BbHeUcz9%2B3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3%2Fjp%2F%2Fr9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26%2F%2BT1vVgS8hfbl3%2B27br65dE%2Br1VknrH6IIfkjolmh%2FA8hwYJv1lySInYXhXc8W49r235%2B%2ByVPVQQE2lee3n%2Fd3OzS5tZS%2B2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5%2BtxtHnnHR9%2BqiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X%2BEsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2%2BLcxVpcu%2F23d1%2F%2B%2BElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ%2F5%2Bm8DcEiCMt3XXpCTwN0YkKrYkFu%2F0CfpCSC3M1IEEeILJ0ca9et%2Fc%2FfliwNLQAb%2BSxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo%2BOAYlaUkVI7hkAk8ON%2BPpd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0%2F0%2B9ICeDBmGyJDQoEzIyjxCr3YKET%2F7%2BvitWhIRopQh4%2BZ%2F7fyAtX4PPQG%2FX1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF%2FCvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU%2BL8Gg4AEBlXgFBdN8JgQ6M%2BfUHJDJNWiQD1yHkXOlDLLMcz53xTf%2FdNtASNBXn5l598y%2B%2Bb%2BL3k2j5gFdU1KkA2%2FkdEOR9IDsOpSCQBN4M%2F3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq%2FavB653MfrdnueJf%2FXllP%2BkryXozl%2B%2BtoF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP%2FswRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w%2FyKJJEL2lh%2FqSgDt2vrohTdJL%2FQdzBlvMtUQ6Io6cXgwBngZWD%2FxIeZVeM49o5g30KEGh%2B1Td4g%2B8f%2FMfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE%2FoYt3B48zekzEIVF7Ae%2BcFPSHgCxOvXpwm%2FFLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ%2F2DQFCem4pgOR4gzviLQDD3gIRzE1h%2BPshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ%2FsOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x%2BruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug%2BCx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L%2BqgjOZ7t5HvHvScUH%2FceUMrPrmkTzW%2BagbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3%2BkYguGdGAoWHOas0nsAqowlnbUyD%2BBPpoNX56M%2FR7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG%2FKsRTuxhRoE%2BQ6T3gZqNPk0pWUXsfwEaA%2FIHyQHX1sZM%2FCiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa%2FKBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo%2FSlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7%2FjVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY%2BEK8BSsV4i2u%2FQecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL%2BdQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3%2FRJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0%2Fvg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA%2Bk8VtahjdHhCQFzVoebpTC0S2uV%2B2SOOCkSFl%2FWitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH%2Fu1to637GBT8%2Fjxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM%2B8ejUbKOWyRCt61TlHqYQihVKSY7%2Bloy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA%2FnpmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj%2BHBc5MA%2BNx5m9LuJ0R%2B8hhhJCgnd1uvcOoZAS%2Fsn9sqBmVNCnoIzAJwafdj%2BuUKnvKuwCeBgCutHozM%2BDEhUYDbNSgScu1vit2tfUfA%2FCPrmgvtRHqBM4gCByBOQKTAUPAl8Fw
Ab4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa%2BIyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b%2F%2FZThnbk4wBw%2B%2F6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9%2BODLzZuTcI65%2FoawKyduW3%2F7zj9OH6%2BihEgDcA82t3GEBxMkQBqntFvPC%2BMDjdS5qm0IGWgt%2FRwKveT7Yvjk8I8G%2FoNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ%2Bgk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K%2FaS3jsf3D1%2B%2Fqrhp0fE2MCkIzwJ6lYy6Fu%2FJcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua%2FBF72fpvjMNc0VAn%2FFCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX%2FhBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx%2Bv3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6%2BO7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B%2B4Szt%2F2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT%2BTzl%2BIHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n%2BkQ%2Fe6l%2Bsuw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6%2Bdrosz1zKH%2B3LPGIsrq2PDu79%2FvA5F8p3igMEAFjZQV%2BCrCxNicM%2FFdY%2FUQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5%2BPQm7ra%2FklF%2FF8WcngG4r1%2FhLG5F%2BHsQ%2BfA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy%2FnXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg%2B2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi%2BoZ%2F10m3e8bAgSQE8dOQE7CJMh3e5fL%2BoylJiCKom3HSoBYDqdp%2BpjYLfX%2FTa%2FUBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa%2FXNx48%2BOaT%2FRCv%2BiIIT0%2FvvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L%2BI8AAvydNUrElRtkAAAAASUVORK5CYII%3D
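    One detail stands out in the rejected strings above: they end in %3D and contain %2B and %2F, i.e. the base64 has been percent-encoded. The Drive thumbnail field expects URL-safe ("web-safe") base64 in the RFC 4648 section 5 alphabet, with '+' and '/' swapped for '-' and '_', not a URL-escaped string. A sketch of the conversion (the function name is invented here):

        // Convert standard base64 to the web-safe alphabet. Do NOT run the
        // result through encodeURIComponent before putting it in the JSON body.
        function toWebSafeBase64(b64) {
          return b64.replace(/\+/g, '-').replace(/\//g, '_');
        }

        // Hypothetical usage with the multipart metadata from the post:
        // metadata.thumbnail.image = toWebSafeBase64(rawBase64String);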


  • Could not find rake-10.1.0 in any of the sources

    - by spuder
    I've got a Ruby on Rails application (GitLab) which is installed via Puppet. Everything on the test system runs fine, but production generates an error about rake:

        Running /home/git/gitlab-shell/bin/check
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.

    Here is the full rake check:

        root@gitlab:/home/git# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished

        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.
        gitlab-shell self-check failed
          Try fixing it:
          Make sure GitLab is running;
          Check the gitlab-shell configuration file:
          sudo -u git -H editor /home/git/gitlab-shell/config.yml
          Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished

        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished

        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ...
        Spencer Owen / bar ... yes
        Projects have satellites? ...
        Spencer Owen / bar ... can't create, repository is empty
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.4)
        Checking GitLab ... Finished

    The step 'gitlab-shell check' effectively runs the following command. If I run that command manually, everything passes:

        root@gitlab:/home/git/gitlab# sudo -u git -H /home/git/gitlab-shell/bin/check
        Check GitLab API access: OK
        Check directories and files:
          /home/git/repositories: OK
          /home/git/.ssh/authorized_keys: OK

    I have verified that rake is in fact installed:

        root@gitlab:/home/git/gitlab# gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# bundle install
        root@gitlab:/home/git/gitlab# sudo -u git -H gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# sudo -u git -H bundle install

    Ruby is installed with update-alternatives:

        root@gitlab:/home/git/gitlab# sudo -u git -H ruby --version
        ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which ruby`
        lrwxrwxrwx 1 root root 22 Oct  8 20:26 /usr/bin/ruby -> /etc/alternatives/ruby
        root@gitlab:/home/git/gitlab# sudo -u git -H gem --version
        2.1.10
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which gem`
        lrwxrwxrwx 1 root root 21 Oct 10 20:50 /usr/bin/gem -> /etc/alternatives/gem

    I've tried the solution mentioned below, to allow shared gems:
    http://stackoverflow.com/questions/19284914/bundle-exec-fails-with-could-not-find-rake-10-1-0-in-any-of-the-sources
    http://stackoverflow.com/questions/18978002/could-not-find-rake-with-bundle-exec

        root@gitlab:/home/git/gitlab# cat /home/git/gitlab/.bundle/config
        ---
        BUNDLE_FROZEN: '1'
        BUNDLE_PATH: vendor/bundle
        BUNDLE_WITHOUT: development:test:postgres
        BUNDLE_DISABLE_SHARED_GEMS: '1'

    I've exhausted Google, so I'm hoping for someone more familiar with Ruby to offer ideas on how to resolve the error: Could not find rake-10.1.0 in any of the sources.
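    Since the standalone check passes but the same check fails when launched from the rake task, one plausible (unconfirmed) culprit is the GitLab app's Bundler environment leaking into the gitlab-shell subprocess: gitlab-shell has no rake in its own bundle, but it inherits BUNDLE_GEMFILE and friends from the parent process. A diagnostic experiment, not a fix; the variable names are the standard ones Bundler sets:

        # Re-run the failing step with the inherited Bundler state cleared.
        sudo -u git -H env BUNDLE_GEMFILE= BUNDLE_PATH= BUNDLE_BIN_PATH= RUBYOPT= \
            /home/git/gitlab-shell/bin/check

    If that makes the check pass, the fix belongs in whatever invokes gitlab-shell from the rake task (for example, wrapping the call in Bundler.with_clean_env), rather than in the gem setup itself.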


  • Writing a code example

    - by Stefano Borini
    I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I think of an example as the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result. A perfect example must have the following characteristics:

    - Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues that could arise, the more likely it is that something goes wrong, and the more difficult it is to debug and solve the situation.
    - Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving the minimal additional behavior from external libraries.
    - Helpful: the code should bring you forward, step by step, using comments or self-documenting code.
    - Extensible: the example code should be a small "framework" or blueprint for additional tinkering. A learner can start by adding features to this blueprint.
    - Recyclable: it should be possible to extract parts of the example to use in your own code.
    - Easy: example code is not the place to show your code-fu skillz. Keep it easy.

    A helpful acronym: SPHERE.

    Prototypical examples of violations of those rules are the following:

    - Violation of Self-containedness: an example spanning multiple files without any real need for it. If your example is a Python program, keep everything in a single module file. Don't sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one class per file, if I remember correctly).
    - Violation of Pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focalize your example code, introducing code for event handling, controls initialization etc., and this is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature.
    - Violation of Helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code.
    - Violation of Extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. Example: if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if the user of the library needs to read data from an HTTP server instead, he just has to modify the getData() method and use the example almost as-is. Another violation of Extensibility arises if the example code is not under a fully liberal (e.g. MIT or BSD) license.
    - Violation of Recyclability: when the code layout is so intermingled that it is difficult to easily copy and paste parts of it and recycle them into another program. Again, licensing is also a factor.
    - Violation of Easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything on a single line of map, filter and so on, but that might not be helpful to someone else, who is already under pressure to understand your library and now has to understand your code as well.

    And in general, the final rule: if it takes more than 10 minutes to compile the code, run it, read the source, and understand it fully, the example is not a good one. Please let me know your opinion, either positive or negative, or your experience in this regard.
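    To make the checklist concrete, here is a deliberately tiny example-of-an-example in Python; everything in it (the names, the inline CSV payload) is invented for illustration and not taken from any real library:

        """A toy example following the SPHERE checklist.

        Self-contained (stdlib only, inline data), Pertinent (shows exactly one
        thing: loading rows as dicts), Helpful (commented), Extensible and
        Recyclable (swap get_data() to change the source), Easy (no tricks).
        """
        import csv
        import io

        SAMPLE = "x,y\n1,2\n3,4\n"  # inline data keeps the example dependency-free

        def get_data(stream):
            """Return the payload as a list of dicts. Replace this function to
            read from a file or an HTTP server without touching the caller."""
            return list(csv.DictReader(stream))

        if __name__ == "__main__":
            for row in get_data(io.StringIO(SAMPLE)):
                print(row["x"], row["y"])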


  • xsl:include template with no default namespace causes xmlns=""

    - by CraftyFella
    Hi, I've got a problem with xsl:include and default namespaces which causes the final XML document to contain nodes with xmlns="". In this scenario I have one source document, which is plain old XML and doesn't have a namespace:

        <?xml version="1.0" encoding="UTF-8"?>
        <SourceDoc>
          <Description>Hello I'm the source description</Description>
          <Description>Hello I'm the source description 2</Description>
          <Description/>
          <Title>Hello I'm the title</Title>
        </SourceDoc>

    This document is transformed into 2 different XML documents, each with its own default namespace.

    First document:

        <?xml version="1.0" encoding="utf-8"?>
        <OutputDocType1 xmlns="http://MadeupNS1">
          <Description>Hello I'm the source description</Description>
          <Description>Hello I'm the source description 2</Description>
          <Title>Hello I'm the title</Title>
        </OutputDocType1>

    Second document:

        <?xml version="1.0" encoding="utf-8"?>
        <OutputDocType2 xmlns="http://MadeupNS2">
          <Description>Hello I'm the source description</Description>
          <Description>Hello I'm the source description 2</Description>
          <DocTitle>Hello I'm the title</DocTitle>
        </OutputDocType2>

    I want to be able to re-use the template for descriptions in both of the transforms, as it's the same logic for both types of document. To do this I created a template file which is xsl:included in the other 2 transformations:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output indent="yes" method="xml"/>
          <xsl:template match="Description[. != '']">
            <Description>
              <xsl:value-of select="."/>
            </Description>
          </xsl:template>
        </xsl:stylesheet>

    Now the problem here is that this shared transformation can't have a default namespace, as the namespace differs depending on which of the calling transformations includes it. E.g. for the first document's transformation:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output indent="yes" method="xml"/>
          <xsl:template match="SourceDoc">
            <OutputDocType1 xmlns="http://MadeupNS1">
              <xsl:apply-templates select="Description"/>
              <xsl:if test="Title">
                <Title>
                  <xsl:value-of select="Title"/>
                </Title>
              </xsl:if>
            </OutputDocType1>
          </xsl:template>
          <xsl:include href="Template.xsl"/>
        </xsl:stylesheet>

    This actually produces the following output:

        <?xml version="1.0" encoding="utf-8"?>
        <OutputDocType1 xmlns="http://MadeupNS1">
          <Description xmlns="">Hello I'm the source description</Description>
          <Description xmlns="">Hello I'm the source description 2</Description>
          <Title>Hello I'm the title</Title>
        </OutputDocType1>

    Here is the problem: on the Description lines I get xmlns="". Does anyone know how to solve this issue? Thanks, Dave
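    One workaround to experiment with (an untested sketch): since both the name and namespace attributes of xsl:element are attribute value templates in XSLT 1.0, the shared template can build the Description element in a namespace supplied by whichever stylesheet includes it. The variable name target-ns below is invented for illustration:

        <!-- Template.xsl: no default namespace needed -->
        <xsl:template match="Description[. != '']">
          <xsl:element name="Description" namespace="{$target-ns}">
            <xsl:value-of select="."/>
          </xsl:element>
        </xsl:template>

        <!-- In the first document's stylesheet, next to the xsl:include -->
        <xsl:variable name="target-ns" select="'http://MadeupNS1'"/>

        <!-- In the second document's stylesheet -->
        <xsl:variable name="target-ns" select="'http://MadeupNS2'"/>

    With xsl:include, top-level variables from the including stylesheet are visible to the included template, so each transform stamps its own namespace on the shared output.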

