Search Results

Search found 55134 results on 2206 pages for 'argument error'.

Page 704/2206 | < Previous Page | 700 701 702 703 704 705 706 707 708 709 710 711  | Next Page >

  • Adding a comma to a resource name in Microsoft Project

    - by John Paul Cook
    Microsoft Project does not allow a comma to be added to a resource name. In healthcare, the norm is to refer to people using the pattern of Name, Title which in my case is John Cook, RN. Not all commas are equal. By substituting a different comma for the one Project doesn’t like, it’s possible to add a comma to a resource name. Figure 1. Error message after trying to add a comma to a resource name in Microsoft Project 2013. The error message refers to “the list separator character” that is commonly...(read more)

    Read the article

  • Juju bootstrap fails with "Temporary failure in name resolution" using Amazon AWS

    - by Will
    I have followed the instructions at https://juju.ubuntu.com/docs/config-aws.html to try to set up a juju environment. It all seemed to set up fine: SSH keys, generated config, repository, adding the access ID and secret keys to the environments.yaml file. My key file from the AWS IAM management console was called credentials.csv rather than rootkey, though; I couldn't find the link described in the documentation. When I give the command juju bootstrap from the testing page, it fails with this error: juju bootstrap ERROR Get https://s3-us-west-1.amazonaws.com/juju-gobblygookmynukmbersinhere/provider-state: lookup s3-us-west-1.amazonaws.com: Temporary failure in name resolution (note I just replaced my numbers for this posting; my actual terminal has my numbers in). This is my first attempt at any EC2 work, so I have gone in and created new IAM profiles. What have I done wrong? Any help would be great. I think I'm in over my head!
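
    A hedged first diagnostic, since the error points at name resolution rather than credentials, is to check whether the machine can resolve the S3 endpoint at all:

        # Check DNS resolution of the S3 endpoint juju is trying to reach
        host s3-us-west-1.amazonaws.com
        # Query a public resolver directly to rule out a broken local DNS setup
        nslookup s3-us-west-1.amazonaws.com 8.8.8.8
        # Inspect which nameservers the machine is actually using
        cat /etc/resolv.conf

    If the first command fails but the second succeeds, the problem is the local resolver configuration, not juju or AWS.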

    Read the article

  • New release of "OLAP PivotTable Extensions"

    - by Luca Zavarella
    For those who are not familiar with this add-in, OLAP PivotTable Extensions adds features of interest to Excel 2007 or 2010 PivotTables pointing to an OLAP cube in Analysis Services. One feature I like very much is seeing the MDX query code associated with the pivot currently in use in Excel. You can find all the details here: http://olappivottableextend.codeplex.com/ A new version of the add-in (0.7.4) was recently released; it introduces no new features, but fixes a significant bug. International users who run a different Windows language than their Excel UI language may receive an error message when they double-click a cell and perform drillthrough, which reads: "XML for Analysis parser: The LocaleIdentifier property is not overwritable and cannot be assigned a new value". This error was caused by OLAP PivotTable Extensions in some situations, and release 0.7.4 fixes the problem. Enjoy!

    Read the article

  • How can I enable Unity 3d after installing Bumblebee? GLX Problems

    - by ashley
    I'm new to Ubuntu; I'm running 12.04 64-bit on a Dell XPS L207x with an NVIDIA GT 555M card. From what I could gather online, I needed to install Bumblebee to get the most out of the Optimus system and better battery life. I can confirm that Bumblebee is working by running optirun glxgears, for example. If I run just glxgears, I get the following error: Error: couldn't get an RGB, Double-buffered visual. I'm also unable to run Unity 3D, which I would very much like. I'd greatly appreciate any and all help; please be gentle.
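
    A hedged way to see what is going on, assuming the mesa-utils package is installed, is to compare the GLX stack inside and outside of optirun; Unity 3D needs working direct rendering on the default (Intel) display:

        # Should report the Intel driver and "direct rendering: Yes" for Unity 3D
        glxinfo | grep -E "direct rendering|OpenGL renderer"
        # The same query through Bumblebee should report the NVIDIA renderer
        optirun glxinfo | grep "OpenGL renderer"
        # Run Unity's own support test
        /usr/lib/nux/unity_support_test -p

    If plain glxinfo fails while optirun glxinfo works, the Intel/Mesa GLX libraries were likely clobbered by an earlier NVIDIA driver install.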

    Read the article

  • How to install a Logitech c310 webcam?

    - by Teja
    I bought a new Logitech C310 webcam. I don't know how to install a webcam on Ubuntu, so I am trying to install its software through the terminal window: wine /media/LWS_2_0/Setup.exe but it shows fixme:ole:TLB_ReadTypeLib Header type magic 0x00905a4d not supported. err:ole:TLB_ReadTypeLib Loading of typelib L"Z:\\media\\LWS_2_1\\MSetup.exe" failed with error 0 and it opens a window saying "We have detected you have connected your webcam to a USB 1.1 port; for the best performance and full feature set, we suggest using a USB 2.0 port". I tried all the USB ports available in my system; it shows the same error. Please tell me how to install this webcam on Ubuntu 11.10. Thanks for reading.
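
    Worth knowing before fighting with Wine: the C310 is a UVC webcam, and Ubuntu 11.10 ships the uvcvideo kernel driver, so the Windows installer should not be needed at all. A hedged check:

        # Confirm the camera shows up on the USB bus
        lsusb | grep -i logitech
        # Confirm the kernel created a video device node for it
        ls /dev/video*
        # Test the picture in a simple webcam application
        sudo apt-get install cheese
        cheese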

    Read the article

  • Bumblebee : Module bbswitch could not be loaded

    - by StepTNT
    I've upgraded to 12.04 and had to switch from Ironhide to the latest version of Bumblebee. Now, when I try to run bumblebeed, I get this error: FATAL: Module bbswitch not found. [ERROR]Module bbswitch could not be loaded (timeout?) [WARN]No switching method available. The dedicated card will always be on. I don't really need the secondary VGA on Kubuntu, so I would like a way to shut the discrete GPU down for good and stop wasting battery. I can't disable it from the BIOS because I use it on Windows. My card is an NVIDIA 540M.
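
    A hedged sketch of the usual fix, assuming the Bumblebee PPA that provides bbswitch as a DKMS package: the FATAL line generally means the module was never built for the running kernel.

        # Build bbswitch for the current kernel via DKMS
        sudo apt-get install bbswitch-dkms
        # Load it and confirm it registered
        sudo modprobe bbswitch
        lsmod | grep bbswitch
        # Restart the Bumblebee daemon so it picks up the switching method
        sudo service bumblebeed restart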

    Read the article

  • Mounting Samsung Galaxy S2 via USB on Ubuntu 13.04

    - by argvar
    Connecting my Samsung Galaxy S2 to Windows 7 worked seamlessly. Now that I'm on Ubuntu 13.04, I'd like to access my phone's drive via the USB cable, but that's not working. When I plug it in I get an error message in Ubuntu: Unable to mount SAMSUNG_Android Unable to open MTP device '[usb:002,008]' Then when I unplug it I see an error message on the phone: Attention Unable to find software on your PC that can recognize your device. Service pack 3, Windows Media Player version 10 or higher for Windows XP, or Android File Transfer for Mac OS must be installed. You can download and install PC Kies from http://www.samsung.com/kies in order to sync data with your device, back up data, and upgrade your device (Windows and Mac OS are supported) What can I do here to fix this? I don't want to use any special software with the phone, just access the phone's drive.
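
    A hedged workaround (the mount point below is hypothetical) is to mount the phone's MTP storage explicitly with an MTP FUSE filesystem rather than relying on automount:

        # Install MTP userspace tools
        sudo apt-get install mtpfs mtp-tools
        # Confirm the phone answers MTP queries
        mtp-detect
        # Mount the phone on a mount point of your choosing
        mkdir -p ~/phone
        mtpfs ~/phone
        # Unmount cleanly when finished
        fusermount -u ~/phone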

    Read the article

  • Install squid3 + eCAP module

    - by Hacker
    I want to install squid3 and its eCAP module on my latest Ubuntu. Squid itself installed fine with sudo apt-get install squid3 following these instructions: http://e-healthexpert.org/node/431 But how do I configure it to enable the eCAP module? I need to pass something like the --enable-ecap option during installation, which I could not do. I then tried to build it manually using http://code.google.com/p/squid-ecap-gzip/source/browse/wiki/Installation.wiki?r=24 but while installing squid3 it gives an error, i.e. the command sudo make gives some g++ error. So how do I install it and configure it for eCAP? Please help.
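
    The packaged squid3 is built without eCAP, so --enable-ecap really does mean rebuilding from source. A hedged outline (package names vary by release, and apt-get source needs deb-src lines enabled):

        # Toolchain, squid's build dependencies, and the eCAP headers
        sudo apt-get install build-essential libecap-dev
        sudo apt-get build-dep squid3
        # Fetch the source and rebuild with eCAP enabled
        apt-get source squid3
        cd squid3-*
        ./configure --enable-ecap
        make
        sudo make install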

    Read the article

  • Unable to mount external hard drive

    - by arranjamesroche
    Basically, my 12.10 update crashed halfway through, so I've had to start again and put all my data onto an external HDD. It was all going fine until this came up when I tried to restore my data off the HDD: Error mounting /dev/sdb1 at /media/amy/CA47-8339: Command-line `mount -t "vfat" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush" "/dev/sdb1" "/media/amy/CA47-8339"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Now I'm stuck, completely clueless as to how I'll get anything off the hard drive.
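
    Before assuming the data is gone, two hedged diagnostics are worth running: the kernel log usually names the real problem, and dosfstools can often repair a damaged FAT superblock:

        # See why the kernel rejected the mount
        dmesg | tail
        # Install dosfstools if needed, then check the FAT filesystem
        sudo apt-get install dosfstools
        sudo fsck.vfat -v /dev/sdb1

    Read fsck.vfat's output carefully before letting it write any repairs to a drive holding your only copy of the data.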

    Read the article

  • php5.4 + freebsd8.3 + nginx can't get errors

    - by Alexey Perepechko
    I am seeing confusing behaviour: I can't get any errors into a log file or onto the screen. I made a file index.php with content like this: "<?php a();". Normally I would get a message like "Call to undefined function a()", but when I call this script on my configuration I get nothing: only a white screen and empty logs. I checked all permissions and turned on every log I could find. Nothing. Please help me. My configuration is: freebsd 8.3-RELEASE PHP 5.4.7 (fpm-fcgi) nginx version: nginx/1.2.4 FPM-config [global] pid = run/php-fpm.pid error_log = log/php-fpm.log log_level = notice emergency_restart_threshold = 5 emergency_restart_interval = 2 process_control_timeout = 2 daemonize = yes events.mechanism = kqueue [puser] listen = /usr/local/www/host/tmp/php-fpm.sock; listen.backlog = -1 listen.allowed_clients = 127.0.0.1 listen.owner = puser listen.group = puser listen.mode = 0666 user = puser group = puser pm = dynamic pm.max_children = 30 pm.start_servers = 2 pm.min_spare_servers = 2 pm.max_spare_servers = 5 pm.max_requests = 50 slowlog = /usr/local/www/host/logs/fpm.log.slow request_slowlog_timeout = 1s rlimit_files = 1024 rlimit_core = 0 chroot = /usr/local/www/host/ catch_workers_output = yes env[HOSTNAME] = $HOSTNAME env[TMP] = /tmp env[TMPDIR] = /tmp env[TEMP] = /tmp php_admin_value[upload_tmp_dir] = /tmp php_admin_value[cgi.fix_pathinfo] = 0 php_admin_value[date.timezone]= 'Europe/Moscow' php_admin_value[memory_limit] = 320m php_admin_value[max_execution_time] = 180 php_admin_flag[log_errors] = on php_admin_value[error_log] = /logs/fpm-err.log php_admin_value[error_reporting] = 'E_ALL & ~E_NOTICE' php_admin_value[display_errors] = on php_admin_flag[display_startup_errors] = on NGINX config user www; worker_processes 2; worker_rlimit_nofile 80000; error_log /var/log/nginx_error.log notice; #pid logs/nginx.pid; events { worker_connections 2048; use kqueue; } http { server_tokens off; client_max_body_size 4m; include mime.types; default_type application/octet-stream; charset utf-8; sendfile on; keepalive_timeout 65; tcp_nopush on; tcp_nodelay on; log_format IP .$remote_addr.; log_format main '$remote_addr - $remote_user [$time_local] $request $request_body ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; reset_timedout_connection on; server { listen 80; server_name www.example.com; access_log /usr/local/www/host/logs/access.log main; error_log /usr/local/www/host/logs/error.log error; error_page 500 502 503 504 /errors/50x.html; error_page 404 /errors/404.html; root /usr/local/www/host/htdocs; index index.php index.html index.htm; location / { index index.html index.php; try_files $uri /index.php?$args; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_intercept_errors on; fastcgi_pass unix:/usr/local/www/host/tmp/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /htdocs$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED /htdocs$fastcgi_script_name; include /usr/local/etc/nginx/fastcgi_params; } } } PHP config (php.ini) [PHP] engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = 
dl,system,exec,passthru,shell_exec disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On error_log = /var/log/php-fpm-error.log variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = Europe/Moscow [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] I need to get errors on display and detailed record in the error.log.
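
    One detail worth checking in the configuration above (an observation, not a confirmed diagnosis): the pool is chrooted to /usr/local/www/host/, so its error_log path /logs/fpm-err.log resolves inside the chroot. If that directory does not exist or is not writable by the pool user, errors are dropped silently. A hedged check, run as root on the FreeBSD host:

        # The pool's log path is relative to the chroot, so this is the real location
        ls -ld /usr/local/www/host/logs
        # Make sure the pool user can write there
        chown puser:puser /usr/local/www/host/logs
        # Watch the log while requesting the failing script
        tail -f /usr/local/www/host/logs/fpm-err.log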

    Read the article

  • Unable to install on a Samsung 305v5a

    - by Antony
    I have used Ubuntu for years. I bought a Samsung 305v5a-so2 laptop yesterday; it runs an AMD A8 quad-core. I have a CD of 10.04, and as I was not clear about whether to install 32- or 64-bit, I thought I would run the Ubuntu trial from the CD to see it. After about 30 minutes I started getting authentication failure messages, then Squashfs errors (Unable to read fragment cache entry), then a zillion buffer logical error messages, 17,000+ of them. Should I download 11.10 in 32-bit, or go and try the 64-bit? I really don't want to mess up the new laptop already, but I'm not going to want to work with Windows 7 either. Thanks for any help
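
    Squashfs and buffer errors during a live session usually point at bad media or a bad download rather than at the laptop itself. A hedged first step (the ISO filename is illustrative) is to verify the image against the published checksums before burning another disc, and to use the boot menu's "Check disc for defects" option:

        # Compare against the MD5SUMS file published for the release
        md5sum ubuntu-11.10-desktop-amd64.iso

    On the 32- vs 64-bit question: the AMD A8 is a 64-bit CPU, so the 64-bit image is the natural choice.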

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a third, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind: Engine (in separate DLL): public class Engine { DatabaseEntities _engineContext; public Engine() { // Connection string and procedure managed in DB layer _engineContext = DatabaseEntities.Connect(); } public void ChangeSomeEntity(SomeEntity someEntity, int newValue) { //Suppose there's some validation too, non trivial stuff someEntity.Value = newValue; _engineContext.SaveChanges(); } } And report: public partial class MyReport : Page { Engine _engine; DatabaseEntities _webpageContext; public MyReport() { _engine = new Engine(); _webpageContext = DatabaseEntities.Connect(); } public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e) { SomeEntity someEntity; //Wrong way: //Get the entity from the webpage context someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId); //Send the entity from _webpageContext to the engine _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context //Right(?) way: //Get the entity from the engine context someEntity = _engine.GetSomeEntity(SomeEntityId); //undefined above //Send the entity from the engine's context to the engine _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); } } Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, and only to give the Engine entities from its own context, but this is a very error-prone design. I see the error of my ways now; I just don't know the right path. I'm considering: Creating the connection in the Engine and passing it off to the webpage: always instantiate an Engine, make its context accessible from a property, and share it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX? Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?). Only talking through IDs: this creates redundancy and isn't always practical, and it sounds archaic, but at the same time I already retrieve things from the page as IDs that I need to fetch anyway. What would be the best compromise here for safety, ease of use and understanding, stability, and speed?

    Read the article

  • Row Count Plus Transformation

    As the name suggests, we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox, recreated the functionality, and extended upon it. There are two things about the current version that we thought could do with cleaning up:

    - Lack of a custom UI
    - You have to type the variable name yourself

    In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write, or copy and paste, the same piece of code more than once, then you should be thinking about a custom component, and here it is. The Row Count Plus Transformation populates variables with the values returned from:

    - Counting the rows that have flowed through the path
    - Returning the time in seconds between when it first saw a row come down this path and when it saw the final row

    It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve visibility of this information, such that it is captured in package logging, or it can be used to drive workflow in the case of an error event.

    Properties
      Property                 Data Type                        Description
      OutputRowCountVariable   String                           The name of the variable into which the count of rows read will be passed (optional).
      OutputDurationVariable   String                           The name of the variable into which the duration in seconds will be passed (optional).
      EventType                RowCountPlusTransform.EventType  The type of event to fire during post-execute; the event includes the row count and duration values.

    RowCountPlusTransform.EventType Enumeration
      Name         Value  Description
      None         0      Do not fire any event.
      Information  1      Fire an Information event.
      Warning      2      Fire a Warning event.
      Error        3      Fire an Error event.

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about which components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only: finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?
    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads
    The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Row Count Plus Transformation for SQL Server 2005 Row Count Plus Transformation for SQL Server 2008 Row Count Plus Transformation for SQL Server 2012

    Version History
      SQL Server 2012
      Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
      SQL Server 2008
      Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
      SQL Server 2005
      Version 1.1.0.43 - Bug fix for duration. For long running processes the duration second count may have been incorrect. (8 Sep 2006)
      Version 1.1.0.42 - SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
      Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

    Screenshot

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Read the article

  • Cannot escape character '@' in fstab file

    - by ubuntico
    I want to attach a remote FTP directory as a local directory on my system. I use a curlftpfs line in the fstab file, but to log in I have to pass my user name and password. The user name has a special character (@), which needs to be escaped via its octal ASCII code; the octal ASCII for '@' is 100. But when I enter the following into the fstab file, curlftpfs#myself\100myself.com:ftpPassword@ftp://ftp.mysite.com /mnt/somedir I get an error saying Error connecting to ftp: Couldn't resolve host 'myself.com:ftpPassword@ftp://ftp.mysite.com' The fstab does not seem to recognize the escaped symbol @ (\100) and thinks that the FTP site starts immediately after the @ symbol (just as if I hadn't escaped it). Can someone help? Why can't I escape @, when it can be done for the space character, for example?
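
    A hedged note on the mechanism: the error text suggests the \100 actually was decoded to @, and that curlftpfs then split the string at the first @ it found, treating everything after it as the host. With libcurl-based tools the usual cure is to percent-encode the @ in the user name instead; this is an assumption about your curlftpfs version, and the line below is illustrative:

        # %40 is the URL encoding of @ inside the user name
        curlftpfs#myself%40myself.com:ftpPassword@ftp.mysite.com /mnt/somedir fuse allow_other 0 0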

    Read the article

  • B2B 11.1.1.2 no proxy support for FTP

    - by nestor.reyes
    Have you seen this error while trying to use a proxy for a delivery channel within B2B? Transport error: Proxy type must be defined when Proxy host has been specified. Proxy type must be defined when Proxy host has been specified. If so, you are not alone: FTP does not support a proxy. Also note the following entry in the release notes: http://download.oracle.com/docs/cd/E15523_01/relnotes.1111/e10133/b2b.htm#CHDJAFBC 15.1.45 FTP Listening Channel Does Not Have Proxy Support The Generic FTP-1.0 protocol for a listening channel does not have proxy support. The wording says listening channel, but it applies to the delivery channel as well.

    Read the article

  • Ubuntu 12.04.1 LTS USB not being detected after formatting with Startup Disk Creator

    - by Zach
    sudo fdisk -l lists the drive; however, I cannot find it in the file explorer. Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000d871e Device Boot Start End Blocks Id System /dev/sda1 * 2048 486322175 243160064 83 Linux /dev/sda2 486324222 488396799 1036289 5 Extended /dev/sda5 486324224 488396799 1036288 82 Linux swap / Solaris Disk /dev/sdb: 8195 MB, 8195480064 bytes 253 heads, 62 sectors/track, 1020 cylinders, total 16006797 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00027ae4 Device Boot Start End Blocks Id System /dev/sdb1 * 62 15999719 7999829 c W95 FAT32 (LBA) Manually mounting it produces this error message: :~$ sudo mount -t vfat /dev/sdb1 /media/external -o uid=1000,gid=1000,utf8,dmask=027,fmask=137 mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Is the USB stick toast?
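
    The stick is probably not toast: fdisk still sees a valid FAT32 partition. Since Startup Disk Creator just wrote it, there is nothing on it worth saving, so a hedged recovery is simply to recreate the filesystem (the label below is arbitrary):

        # Check what the kernel says about the failed mount first
        dmesg | tail
        # Recreate the FAT32 filesystem from scratch; this erases the stick
        sudo mkfs.vfat -F 32 -n USBSTICK /dev/sdb1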

    Read the article

  • Mplayer can't play *.wmv file

    - by Jimmy
    I have a problem using mplayer to play a *.wmv file on my Ubuntu 11.10. Here are the error messages; could anyone help me solve this problem? I used some keywords to search on Google, but couldn't find an answer. Thank you. Playing testmovie.wmv. ASF file format detected. [asfheader] Audio stream found, -aid 1 [asfheader] Video stream found, -vid 2 VIDEO: [WMV3] 1280x720 24bpp 1000.000 fps 4000.0 kbps (488.3 kbyte/s) Load subtitles in ./ open: No such file or directory [ MGA] Couldn't open: /dev/mga_vid open: No such file or directory [MGA] Couldn't open: /dev/mga_vid [VO_TDFXFB] Can't open /dev/fb0: Permission denied. [VO_3DFX] Unable to open /dev/3dfx. [vdpau] Error when calling vdp_device_create_x11: 1 ========================================================================== Opening video decoder: [dmo] DMO video codecs DMO dll supports VO Optimizations 0 1 DMO dll might use previous sample when requested MPlayer interrupted by signal 11 in module: init_video_codec I am using xv as my video driver.
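
    The log shows mplayer probing video outputs (mga, tdfxfb, 3dfx, vdpau) and then crashing while initializing the DMO codec; the WMV3 stream itself was detected fine. A hedged experiment is to force a simple video output and a different decoder:

        # Force the plain X11 output instead of the probed drivers
        mplayer -vo x11 testmovie.wmv
        # Or force ffmpeg's native WMV3 decoder instead of the DMO codec
        mplayer -vc ffwmv3 testmovie.wmv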

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused proto item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long term case by case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that makes up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations.

    The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library: ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.
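
    To make the mechanism concrete, here is a hedged sketch of the option in use; the paths, object names, and library names are illustrative, not taken from the ON build:

        # Workspace link: libfoo.so is found via -L in the proto, so no warning
        ld -o prog prog.o -L$ROOT/proto/usr/lib -lfoo -z assert-deflib=libc.so
        # If a Makefile error drops the -L flag, libfoo.so is instead found in
        # /usr/lib on the build system, and ld now emits a default-library
        # warning, which -z fatal-warnings would turn into a hard build error.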

    Read the article

  • Set Up Of Common Name Of SSL Certificate To Protect Plesk Panel

    - by Cbomb
    A PCI compliance scanner is balking because the self-signed SSL certificate protecting secure access to Plesk Panel contains a name mismatch: the self-signed cert's name is "Parallels", while the address used to reach Plesk is 'ip address:8443'. So I figured I would get a free SSL certificate to try to fix this error, but I used my server's domain name as the site name when I generated the certificate. So if I visit 'domain name:8443', all is fine, with no SSL warning. But if I visit 'ip address:8443' (which I believe is what the scanner does), I get the certificate name mismatch error, and Digicert's SSL checker says that the certificate name should be the IP address. Can I even generate a certificate whose common name is the IP address? I am tempted to say I should just do what the PCI scanner accepts, but what is really the correct common name to use? Has anybody run into this issue before?
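
    A hedged way to see exactly what the scanner sees (the IP below is a placeholder for your server's address) is to pull the certificate from port 8443 and inspect its subject:

        # Fetch the certificate Plesk presents on 8443 and print its subject
        echo | openssl s_client -connect 203.0.113.10:8443 2>/dev/null | openssl x509 -noout -subject -issuer -dates

    On the underlying question: certificates can name an IP address directly through a subjectAltName entry of type iPAddress, though CA policies vary and such certificates are generally only issued for public, dedicated addresses.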

    Read the article

  • php is not working well on ubuntu 13.10 and mcrypt is missing in phpmyadmin

    - by mohamad
    I've upgraded from Ubuntu 13.04 to 13.10, but I cannot work with PHP pages or phpmyadmin. I tried this way to install LAMP on Ubuntu: sudo apt-get install lamp-server^ phpmyadmin and I've done all of the configuration correctly. After installation I added this line Include /etc/phpmyadmin/apache.conf to /etc/apache2/apache2.conf then I restarted apache2, but in phpmyadmin at the bottom of the page is this error: The mcrypt extension is missing. Please check your PHP configuration I've checked, and mcrypt was in it, but phpmyadmin still reports it missing. The other problem is that on PHP pages it seems like there is no PHP and it's all HTML, because lots of PHP lines are printed in text boxes, like: <? echo $row['details']; ?> Can anybody tell me what I should do?
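
    A hedged sketch of the two usual fixes on 13.10: the mcrypt extension often ends up installed but not enabled for Apache's PHP after an upgrade, and the echoed <? ... ?> lines point at short_open_tag being off, since <?php blocks are evidently being parsed.

        # Enable the mcrypt extension for PHP and restart Apache
        sudo apt-get install php5-mcrypt
        sudo php5enmod mcrypt
        sudo service apache2 restart

    For the raw <? echo ... ?> output, either change those tags to <?php or set short_open_tag = On in /etc/php5/apache2/php.ini.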

    Read the article

  • FTP file access problem

    - by Fahad Uddin
    I recently got malware on my website. I have made a backup of the website on my computer and am trying to wipe my FTP space. I am trying to delete the root folder but I get this error message on all of the malicious files: Response: 550 Could not delete index.php: Permission denied I am the sole administrator of the FTP, so permissions should not be an issue. My hosting provider does not seem to suffer from this problem, as his websites are running well without any malware. I have also tried changing the root folder to 777 to see if the permission change could let me delete the files, but I still get the same error. Please help. Thanks
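
    A hedged guess at the cause: if the malware was dropped by the web server process, the files belong to a different user than your FTP account, and no permission bits you set will let you delete them; only the host (or a shell account) can. Before escalating, it may be worth trying a client that can issue chmod over FTP (host and user names below are hypothetical, and this only works on files your FTP user owns):

        # lftp can chmod and delete recursively over plain FTP
        lftp -u youruser ftp.example.com
        > chmod -R 755 public_html
        > rm -r public_html/infected-dir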

    Read the article

  • Supertuxkart not able to start

    - by subeh.sharma
    It was working fine before I played around with NVIDIA drivers and ended up with this problem. I tried running it through the terminal and I see these messages: subsharm@subsharm-ThinkPad-T61:~$ supertuxkart Irrlicht Engine version 1.7.2 Linux 3.0.0-15-generic #25-Ubuntu SMP Mon Jan 2 17:44:42 UTC 2012 x86_64 [FileManager] Data files will be fetched from: '/usr/share/games/supertuxkart/' [IrrDriver] Creating NULL device Irrlicht Engine version 1.7.2 Linux 3.0.0-15-generic #25-Ubuntu SMP Mon Jan 2 17:44:42 UTC 2012 x86_64 [IrrDriver] Trying OpenGL rendering. [IrrDriver] Tring to create device with 32 bits Xlib: extension "GLX" missing on display ":0.0". [IrrDriver Temp Logger] Level 1: No GLX support available. OpenGL driver will not work. [IrrDriver Temp Logger] Level 2: Fatal error, could not get visual. Segmentation fault subsharm@subsharm-ThinkPad-T61:~$ sudo Xorg -configure [sudo] password for subsharm: Fatal server error: Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again.
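
    The 'extension "GLX" missing' line means the OpenGL side of the driver install is broken, not anything in SuperTuxKart itself. A hedged recovery path is to purge the mismatched NVIDIA bits and reinstall one consistent driver set (package names vary by driver series):

        # Remove all installed NVIDIA driver packages
        sudo apt-get purge 'nvidia*'
        # Reinstall one coherent driver set
        sudo apt-get install nvidia-current
        # Regenerate xorg.conf, then reboot
        sudo nvidia-xconfig
        sudo reboot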

    Read the article

  • Cannot boot Ubuntu 12.04 on Lenovo V570 fresh install

    - by Jonathan
    I just did a fresh install of Kubuntu 64-bit 12.04 (from the DVD) on a Lenovo V570 laptop, as a dual boot with Windows 7. I used the boot layout left over from Linux Mint 12 (which was working). The installation finished with what seemed to be success. However, when rebooting, I get this error: GRUB: "invalid arch independent ELF magic" after install on SSD Then I ran grub-install and update-grub (I didn't install grub-efi, so maybe this is the problem?) and got "unknown filesystem error" at the grub prompt. Desperately, I gave /boot the bootable flag. Then on a reboot I got a strange screen with the letters PXE; it flashed momentarily, too fast to read, and then the computer jumped to a screen asking which drive I wanted to boot from. As of now, I have no idea what I am doing.
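
    The usual way out of a broken GRUB after install is to reinstall it from the live DVD via a chroot, targeting the whole disk rather than a partition. A hedged sketch, assuming the root filesystem is on /dev/sda1 (adjust to your actual layout):

        # From the live session: mount the installed system and chroot into it
        sudo mount /dev/sda1 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        # Reinstall GRUB to the disk's MBR, not to a partition, then rebuild the menu
        grub-install /dev/sda
        update-grub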

    Read the article

  • C Problem with Compiler?

    - by Solomon081
    I just started learning C and wrote my hello world program: #include <stdio.h> int main(void) { printf("Hello World"); return 0; } When I run the code, I get a really long error: Apple Mach-O Linker (id) Error Ld /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug/CProj normal x86_64 cd /Users/Solomon/Desktop/C/CProj setenv MACOSX_DEPLOYMENT_TARGET 10.7 /Developer/usr/bin/clang -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.7.sdk -L/Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug -F/Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug -filelist /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/CProj.LinkFileList -mmacosx-version-min=10.7 -o /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug/CProj ld: duplicate symbol _main in /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/helloworld.o and /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/main.o for architecture x86_64 Command /Developer/usr/bin/clang failed with exit code 1 I am running Xcode. Should I reinstall DevTools?
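
    A reading of the log, offered as a hedged interpretation rather than a certainty: "duplicate symbol _main in ... helloworld.o and ... main.o" means the Xcode target contains two files that each define main, most likely the template-generated main.c plus the new helloworld.c, so removing one from the target should fix the link. To prove the code itself is fine, bypass Xcode entirely:

        # Compile and run the single file directly with the command-line compiler
        cc helloworld.c -o hello
        ./hello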

    Read the article

  • Offline MySQL installation (Using deb file and no internet connection)

    - by Muhammad Gelbana
    I downloaded MySQL's installation package and ran the following command after installing a fresh Ubuntu server: dpkg -i mysql-5.5.28-debian6.0-x86_64.deb It installed fine, and then I tried starting the server manually: /opt/mysql/server-5.5/bin/mysqld And the following error came up: /opt/mysql/server-5.5/bin/mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory How can I install that library offline? I have no means whatsoever of getting an internet connection on that server, and I can't even relocate it to get a temporary connection! Thank you.
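
    A hedged offline route, since dpkg does not resolve dependencies itself: fetch the libaio1 package on any machine with internet access and the same architecture, carry the .deb across, and install it with dpkg (the exact filename depends on the Ubuntu release):

        # On a connected machine (or via packages.ubuntu.com in a browser):
        apt-get download libaio1
        # Copy the .deb to the server, then install it there:
        sudo dpkg -i libaio1_*_amd64.deb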

    Read the article
