Search Results

Search found 11444 results on 458 pages for 'protected internal'.


  • Dependencies not met on 12.04?

    - by Mochan
    Now I'm very aware that there are many questions out there that are quite similar to what I'm experiencing, but I have looked through many and have not found a suitable answer. You are welcome to suggest similar questions, but I doubt it will help. Getting to the issue at hand: whenever I do anything that involves installation, whether it be codecs for videos, new programs or whatever else, I always get the 'Dependencies not met' error. In addition, I get a notification in the panel. When clicked, the menu says this: "An error occurred. Please run Package Manager from the right-click menu or run apt-get in a terminal to see what is wrong. The error message was: 'Error: Broken Count 0'. This usually means your installed packages have unmet dependencies." It gives me three items to click (Show Updates, Install all updates, Check for Updates) and then finally Show Notifications (with a tick) and Preferences. When I try 'Install all Updates' (also Check for Updates and Install), I also get 'Ubuntu has experienced an internal error' and 'Did this error occur when moving from one version of Ubuntu to another?' (I clicked No, because it didn't). So I took its advice and ran sudo apt-get install -f. This is what results:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libapt-pkg4.12:i386
        The following packages will be upgraded:
          libapt-pkg4.12:i386
        1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/941 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        E: Internal Error, No file name for libapt-pkg4.12

    Running sudo apt-get update is fine, but running sudo apt-get install -f still results in the same thing. I really have no idea what to do... can anyone help me?


  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: "After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads. :-D" I've already found out how to increase the stripe cache, and that worked pretty well, but I'd like to know more about an external bitmap. I have an incredibly fast (540 MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure; I've only known about them for as long as I've known about that post. A few questions:

      - What is a bitmap (in terms of mdadm)?
      - What are the advantages of an internal bitmap (over external)?
      - What are the advantages of an external bitmap (over internal)?
      - How do I switch between the two?

    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put the data at significant risk, please let me know.


  • Audio comes out of both the headphones and the internal speaker at the same time (Ubuntu 12.04 LTS) [closed]

    - by pst007x
    I have the same issue on an Aspire: Ubuntu 12.04 LTS 64-bit, Realtek audio chip onboard. If I plug in a headset, audio does not switch from the internal speaker to the headset; instead it plays out of both at the same time. I have looked at the alsamixer settings, all on. I installed gnome-alsamixer and noticed headphone was ticked; if I untick it, the main audio mutes and the headphones no longer work. The headset only works together with the internal speaker. Audio works fine on my other desktop and laptop running this release.

        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)

        salvatore@salvatore-Aspire-7730:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.

        salvatore@salvatore-Aspire-7730:~$ head -n 1 /proc/asound/card*/codec#*
        ==> /proc/asound/card0/codec#0 <==
        Codec: Realtek ALC888
        ==> /proc/asound/card0/codec#1 <==
        Codec: LSI ID 1040
        ==> /proc/asound/card0/codec#2 <==
        Codec: Intel Cantiga HDMI

        salvatore@salvatore-Aspire-7730:~$ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: ALC888 Analog [ALC888 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 1: ALC888 Digital [ALC888 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        salvatore@salvatore-Aspire-7730:~$ uname -a
        Linux salvatore-Aspire-7730 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The alsa-base.conf file does not exist. Tried this:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo alsa force-reload

    Then:

        sudo apt-get purge pulseaudio gstreamer0.10-pulseaudio
        sudo apt-get install pulseaudio gstreamer0.10-pulseaudio indicator-sound

    Also tried this: open a terminal, run sudo gedit /etc/modprobe.d/alsa-base.conf, add a new line at the end of the file (options snd-hda-intel model=generic), save and then reboot. But alsa-base.conf does not exist.


  • libgdx loading textures fails [duplicate]

    - by Chris
    This question already has an answer here: "Why do I get this file loading exception when trying to draw sprites with libgdx?" (4 answers)

    I'm trying to load my texture with:

        playerTex = new Texture(Gdx.files.internal("player.jpg"));

    player.jpg is located under my-gdx-game-android/assets/data/player.jpg, and I get a file loading exception when the texture is created. Full code:

        @Override
        public void create() {
            camera = new OrthographicCamera();
            camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
            batch = new SpriteBatch();
            FileHandle file = Gdx.files.internal("player.jpg");
            playerTex = new Texture(file);
            player = new Rectangle();
            player.x = 800-20;
            player.y = 250;
            player.width = 20;
            player.height = 80;
        }

        @Override
        public void dispose() {
            // dispose of all the native resources
            playerTex.dispose();
            batch.dispose();
        }

        @Override
        public void render() {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            camera.update();
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            batch.draw(playerTex, player.x, player.y);
            batch.end();
            if(Gdx.input.isKeyPressed(Keys.DOWN)) player.y -= 50 * Gdx.graphics.getDeltaTime();
            if(Gdx.input.isKeyPressed(Keys.UP)) player.y += 50 * Gdx.graphics.getDeltaTime();
        }


  • Developing an iOS app for a single device - licensing issue

    - by bfavaretto
    I'm developing an iOS app for a museum as a freelancer. It's a very simple video player, to be installed on a single iPad that will be part of a permanent exhibition, basically acting as a kiosk. It turns out the iPad is the ideal device for that if you're looking for a small and affordable touchscreen. The problem is: as far as I can tell, none of the Apple Developer Program options available will allow me to distribute an app like that. The relevant options are (from the link above): iOS Developer Program ($99/year) Select this program if you would like to distribute apps on the App Store as an individual, sole proprietor, company, organization, government entity or educational institution. iOS Developer Enterprise Program ($299/year) Select this program if you would like to develop proprietary apps for internal distribution within your company, organization, government entity or educational institution. The regular program requires distribution through the App Store. The Enterprise version is for internal distribution within my own organization. Neither is the case here! It seems like I'm doomed to violate Apple's terms of service (and I can think of at least two ways of doing that: jailbreaking, or changing the iPad's date so it won't know the provisioning profile expired). Is that really so, or did I get the descriptions wrong? Has anyone here been in a similar situation?


  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for an issue where I was asked to write a host-accelerator program for an embedded system we are building. The system comprises (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces and share the same 32-bit global physical memory space, and both share access to the system's DRAM through the system bus. (The concept is similar to a BeagleBoard or Raspberry Pi, but with a specialized accelerator added.) The accelerator has its own internal memory (SRAM), which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space).

    On the ARM core (the host) we plan on running Ubuntu 12.04. The intended mode of operation is that the host issues memory transactions on the system bus that are targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, chances are the program will crash with a segmentation violation. So I assume I need some way of communicating with the device at its real (physical) addresses. What is the easiest way to achieve this mode of operation?


  • Ubuntu 12.04 LTS Install Problems (See post for system build details.)

    - by Lokitez
    This is my first ever attempt at working with Ubuntu. I have only ever installed Windows in the past, and that may be the problem. I purchased all new hardware this week and I would really like to give Ubuntu a chance (especially since I don't want to buy another Windows license). First, the hardware:

      - AMD FX-8150 Zambezi 3.6GHz Socket AM3+ 125W eight-core desktop processor
      - ASUS Crosshair V Formula AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX AMD gaming motherboard
      - Samsung 830 Series MZ-7PC128D/AM 2.5" 128GB SATA III MLC internal SSD (this is my intended boot drive)
      - Western Digital VelociRaptor WD5000HHTZ 500GB 10000 RPM SATA 6.0Gb/s 3.5" internal hard drive (a backup drive that I have installed Windows Vista on until I can get Ubuntu to work)
      - G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-pin DDR3 SDRAM, DDR3 1600 (PC3 12800)
      - ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16

    I have downloaded and tried to install both Ubuntu 64-bit and Kubuntu 64-bit (both 12.04). Both always fail to copy a file during install, or otherwise lock up while installing to the SSD. I have burned two copies of the Ubuntu 12.04 image and had the install fail with both. I have installed Vista onto the HDD. Is it possible to mount the Ubuntu file into


  • Booting Ubuntu 12.04 from external eSATA disk

    - by Lord of Scripts
    This is my system topology: Disk #1 (SATA Internal) C: D: (Windows 7 Ultimate) Disk #2 (SATA Internal) E: (Windows Backup) Disk #3 (eSATA External) H: I: (Other windows data) /dev/sdc3 Linux Swap /dev/sdc4 Extended partition /dev/sdc5 Linux / So, I originally had there Ubuntu 8.1 from years ago but never got to use it. Now I used the Ubuntu 12.04 Live CD to install on that same location (That live CD takes a century to boot on a 6GB Intel i7 system...). The installation went fine, I selected it to install on /dev/sdc5 but it never asked me for any boot stuff, where I wanted to install Grub or whatever it is that it uses nowaways (I come from the LILO days when it always worked :-) So, yet again I can't access my new Linux installation. I have to wait a century to boot the "Live" CD and it allows me to see my new installation but I can't do anything with it. I tried the approach of this blog post. Copied the linux.bin of /dev/sdc5 into C: and used the BCDEdit steps to declare the new OS. So when I boot I see the Windows Boot menu and select Linux and after than I only get a black screen with a blinking cursor on the upper left. I can boot into Windows though. So, perhaps it didn't install the boot code on /dev/sdc5? I used this setup years ago booting from Windows with a BIN file: dd if=/dev/sdc5 of=/mnt/share/C/linux.bin bs=512 count=1 I am very reluctant to run GRUB because years ago I did and it wiped out my Windows boot sector and took quite some effort to recover it and be able to boot Windows again. I have been trying to install GRUB on a blank USB stick but I can't find anything clear enough. My system does NOT have a floppy. So can someone give me some ideas about how to get control of my Ubuntu 12.04 installation?


  • How to define implementation details?

    - by woni
    In our project, an assembly combines logic for the IoC container, the project internals and the communication layer. The current version has evolved to have only internal classes in add-in assemblies. My main problem with this approach is that the entry point is only available through the IoC container; it is not possible to use anything other than reflection to initialize the assembly. Everything behind the IoC interface is defined as implementation detail and therefore not intended for use from outside. It is well known that you should not test implementation details (such as private and internal methods), because they should be tested through the public interface. It is also well known that your tests should not use the IoC container to set up the SUTs, because that would result in too many dependencies. So we are using the InternalsVisibleTo attribute to make internals visible to our test assemblies and test the so-called implementation details. I recognize that one problem could be the mix-up of different concerns in that assembly; changing that would make this discussion moot, because the classes would then have to be defined public. Ignoring my concerns with this: isn't the need to test a class reason enough to make it public? The use of InternalsVisibleTo seems unintended and a little bit "hacky", while the approach of testing only against the publicly available IoC container is too costly and would result in integration-style tests. The pros of using internals are that their usages are well known and they do not have to be implemented the way a public method would have to be (documentation, completeness, versioning, ...). Is there a solution that avoids testing against internals but keeps their advantages over public classes, or do we have to redefine what an implementation detail is?
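
    For reference, the InternalsVisibleTo approach described above boils down to something like the following C# sketch; the assembly and test-project names are invented for the example:

        // AssemblyInfo.cs of the add-in assembly (names are hypothetical).
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyCompany.AddIn.Tests")]
        // For a strong-named test assembly, the full public key must be appended:
        // [assembly: InternalsVisibleTo("MyCompany.AddIn.Tests, PublicKey=0024...")]

        // Inside the add-in assembly the type stays internal...
        internal class CommunicationChannel
        {
            internal bool Open(string endpoint)
            {
                return !string.IsNullOrEmpty(endpoint);
            }
        }

        // ...but the named test assembly can instantiate it directly,
        // without going through the IoC container.

    Whether this counts as "hacky" is exactly the judgement call the question is about; the attribute itself is a supported, documented mechanism rather than a reflection trick.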


  • Getting the total number of processors a computer has (C#)

    - by mbcrump
    Here is a code snippet for getting the total number of processors a computer has without using Environment.ProcessorCount. I found out that Environment.ProcessorCount is not necessarily returning the correct value on some Intel-based CPUs.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Globalization;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication4
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int c = ProcessorCount;
                    Console.WriteLine("The computer has {0} processors", c);
                    Console.ReadLine();
                }

                private static class NativeMethods
                {
                    [StructLayout(LayoutKind.Sequential)]
                    internal struct SYSTEM_INFO
                    {
                        public ushort wProcessorArchitecture;
                        public ushort wReserved;
                        public uint dwPageSize;
                        public IntPtr lpMinimumApplicationAddress;
                        public IntPtr lpMaximumApplicationAddress;
                        public UIntPtr dwActiveProcessorMask;
                        public uint dwNumberOfProcessors;
                        public uint dwProcessorType;
                        public uint dwAllocationGranularity;
                        public ushort wProcessorLevel;
                        public ushort wProcessorRevision;
                    }

                    [DllImport("kernel32.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
                    internal static extern void GetNativeSystemInfo(ref SYSTEM_INFO lpSystemInfo);
                }

                public static int ProcessorCount
                {
                    get
                    {
                        NativeMethods.SYSTEM_INFO lpSystemInfo = new NativeMethods.SYSTEM_INFO();
                        NativeMethods.GetNativeSystemInfo(ref lpSystemInfo);
                        return (int)lpSystemInfo.dwNumberOfProcessors;
                    }
                }
            }
        }
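
    If you want to see whether the two values actually disagree on a given machine, a quick comparison (not part of the original snippet) can be dropped into Main:

        // Compare the framework value with the P/Invoke value from the
        // ProcessorCount property defined above.
        Console.WriteLine("Environment.ProcessorCount: {0}", Environment.ProcessorCount);
        Console.WriteLine("GetNativeSystemInfo:        {0}", ProcessorCount);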


  • GRUB2 stuck at rescue console, showing "unknown filesystem" for all partitions

    - by AndiDog
    I installed Ubuntu 12.04 on my external USB drive, where I have a 700GB NTFS partition followed by the new 6GB ext4 partition and a swap partition (all primary). The GRUB MBR is also installed to the external hard disk. Since my BIOS puts the external drive as first disk when booting, I removed my internal hard disk before installation in order to avoid ordering problems. Now when I boot from the external drive, GRUB is stuck at the rescue console with the error "unknown filesystem". grub rescue> ls (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) ls (hd0,<any of them>)/ gives me "unknown filesystem", thus also "insmod normal" GRUB doesn't seem to be able to read my Linux partition as you can see above?! How can I solve this? Additional info: bootinfoscript says (this is with the internal drive in again, but that does not make a difference): Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos2)/boot/grub on this drive. sdb1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sdb2: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb3: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info:


  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a Server/Client application where clients will connect to the server. What I want to do, is make sure that the client connecting to the server is actually using my protocol and I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was took their IP address, the client version, and a few other attributes of the client and sent it as a calculated hash to the server, who then took their IP, and the version of the protocol the client claimed to be using, and calculated that number to see if they matched. This works ok until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP used to calculate this hash with the authentication protocol. My fear is this approach is not secure enough. Since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about: Client IP: 192.168.1.10, Version: 2.4.5.2 hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0) Client Connects to Server client sends: auth hash ip version Server calculates that info, and accepts or denies the hash. Before I go and come up with another algorithm to prove a client can provide data a server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with the server. So a client connects to the server for the first time, and registers their info (mac address or some other kind of unique computer identifier), then when they connect again, the server will recognize that client as a previous person and associate them with their data in the database.
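
    A keyed hash (HMAC) over a server-issued random challenge is the usual building block for this kind of check. The following C# fragment is only a rough sketch of that idea, not the protocol described above, and the distribution of the shared secret still has to be solved separately (for example during the registration step the question mentions):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ClientAuth
        {
            // Both sides must already share this secret (e.g. issued at registration).
            // The value here is obviously just a placeholder.
            static readonly byte[] SharedSecret = Encoding.UTF8.GetBytes("per-client-secret");

            // Server sends a random challenge; the client returns HMAC(secret, challenge + version).
            public static byte[] ComputeResponse(byte[] challenge, string clientVersion)
            {
                using (var hmac = new HMACSHA256(SharedSecret))
                {
                    byte[] versionBytes = Encoding.UTF8.GetBytes(clientVersion);
                    byte[] message = new byte[challenge.Length + versionBytes.Length];
                    Buffer.BlockCopy(challenge, 0, message, 0, challenge.Length);
                    Buffer.BlockCopy(versionBytes, 0, message, challenge.Length, versionBytes.Length);
                    return hmac.ComputeHash(message);
                }
            }
        }

    The server computes the same HMAC with its copy of the secret and compares the two results. Because the challenge is random per connection, replaying an old response does not work, and no IP addresses need to enter the calculation at all.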


  • How to correctly write an installation or setup document

    - by UmNyobe
    I just joined a small start-up as a software engineer after graduation. The start-up is 4 year old, and I am working with the CEO and the COO, even if there are some people abroad. Basically they both used to do almost everything. I am currently on some kind of training phase. I have at my disposition architecture, setup and installation internal documentation. Architecture documentation is like a bible and should contain complete information. The rest are used to give directions in different processes. The issue is that these documents are more or less dated, as they just didn't have the time to change them. I will be in charge of training the next hires, and updating these documents is part of my training. In some there is a lot of hard-coded information like: Install this_module_which_still_exists cd this_dir_name_changed cp this_file_name_changed other_dir_name_changed ./config_script.sh ./execute_script.sh The issues i have faced : Either the module installation is completely different (for instance now there is an rpm, or a different OS) Either names changed, and i need to switch old names by new names Description of the purpose of the current step missing. Information about a whole topic is missing Fortunately these guys are around and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so in the future people don't feel like they are completely rewriting a document each time they are updating it. Do you have suggestions? If there is a lightweight design methodology available online you can point me to it's nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code so I don't know why internal documents deserve a different treatment.


  • Enable wireless greyed out. Disabled by hard switch. Details inside

    - by ltlunatic
    Ok so the basic issue here is this: My wireless has not once enabled through many environments. Wubi install: Enable wireless greyed out Live cd: greyed out Partition alongside win7: greyed out. I installed all software updates, checked additional drivers, installed the latest driver for RTL8111/8168B. rfkill list shows phy0 and phy1. Phy0 has a hardblock on. everything else is unblocked. Now consider this also: I have both a wireless adapter inside my laptop and a wireless usb adapter outside. The internal adapter does not work, hence why I think hardblock is on even if the external slider in 'on'. I have tried commands such as rfkill unblock all as well as sudo rfkill unblock all. No wireless options in the BIOS. I have tried another laptop and desktop and they get wireless from the get go. Ubuntu on my laptop seems to see my belkin adapter as well as my internal (ralink) one. But trying to scan networks yields no results and says network is down always. Ubuntu version: 12.04 lts Ask me anything else you may need to help me, thanks.


  • Just can't get a controller to work

    - by Asaf
    I try to get into mysite/user so that application/classes/controller/user.php should be working, now this is my file tree: code of controller/user.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_User extends Controller_Default { public $template = 'user'; function action_index() { //$view = View::factory('user'); //$view->render(TRUE); $this->template->message = 'hello, world!'; } } ?> code of controller/default.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_default extends Controller_Template { } bootstrap.php: <?php defined('SYSPATH') or die('No direct script access.'); //-- Environment setup -------------------------------------------------------- /** * Set the default time zone. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/timezones */ date_default_timezone_set('America/Chicago'); /** * Set the default locale. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/setlocale */ setlocale(LC_ALL, 'en_US.utf-8'); /** * Enable the Kohana auto-loader. * * @see http://kohanaframework.org/guide/using.autoloading * @see http://php.net/spl_autoload_register */ spl_autoload_register(array('Kohana', 'auto_load')); /** * Enable the Kohana auto-loader for unserialization. * * @see http://php.net/spl_autoload_call * @see http://php.net/manual/var.configuration.php#unserialize-callback-func */ ini_set('unserialize_callback_func', 'spl_autoload_call'); //-- Configuration and initialization ----------------------------------------- /** * Initialize Kohana, setting the default options. * * The following options are available: * * - string base_url path, and optionally domain, of your application NULL * - string index_file name of your index file, usually "index.php" index.php * - string charset internal character set used for input and output utf-8 * - string cache_dir set the internal cache directory APPPATH/cache * - boolean errors enable or disable error handling TRUE * - boolean profile enable or disable internal profiling TRUE * - boolean caching enable or disable internal caching FALSE */ Kohana::init(array( 'base_url' => '/mysite/', 'index_file' => FALSE, )); /** * Attach the file write to logging. Multiple writers are supported. */ Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs')); /** * Attach a file reader to config. Multiple readers are supported. */ Kohana::$config->attach(new Kohana_Config_File); /** * Enable modules. Modules are referenced by a relative or absolute path. */ Kohana::modules(array( 'auth' => MODPATH.'auth', // Basic authentication 'cache' => MODPATH.'cache', // Caching with multiple backends 'codebench' => MODPATH.'codebench', // Benchmarking tool 'database' => MODPATH.'database', // Database access 'image' => MODPATH.'image', // Image manipulation 'orm' => MODPATH.'orm', // Object Relationship Mapping 'pagination' => MODPATH.'pagination', // Paging of results 'userguide' => MODPATH.'userguide', // User guide and API documentation )); /** * Set the routes. Each route must have a minimum of a name, a URI and a set of * defaults for the URI. */ Route::set('default', '(<controller>(/<action>(/<id>)))') ->defaults(array( 'controller' => 'welcome', 'action' => 'index', )); /** * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO']. * If no source is specified, the URI will be automatically detected. 
*/ echo Request::instance() ->execute() ->send_headers() ->response; ?> .htaccess: RewriteEngine On RewriteBase /mysite/ RewriteRule ^(application|modules|system) - [F,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L] Trying to go to http://localhost/ makes the "hello world" page, from the welcome.php Trying to go to http://localhost/mysite/user give me this: The requested URL /mysite/user was not found on this server.


  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few of mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications so we don't have control over when the process calls fork(), so we need a way to safely shutdown Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state: Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks. On Linux, the this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread. Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library and I'm thinking of reseting a pthread_once_t. Maybe because POSIX has an initializer for pthread_once_t that it can be reset to its initial state. #include <pthread.h> #include <stdlib.h> #include <string.h> static pthread_once_t once_control = PTHREAD_ONCE_INIT; static pthread_mutex_t *mutex_ptr = 0; static void setup_new_mutex() { mutex_ptr = malloc(sizeof(*mutex_ptr)); pthread_mutex_init(mutex_ptr, 0); } static void prepare() { pthread_mutex_lock(mutex_ptr); } static void parent() { pthread_mutex_unlock(mutex_ptr); } static void child() { // Reset the once control. pthread_once_t once = PTHREAD_ONCE_INIT; memcpy(&once_control, &once, sizeof(once_control)); setup_new_mutex(); } static void init() { setup_new_mutex(); pthread_atfork(&prepare, &parent, &child); } int my_library_call(int arg) { pthread_once(&once_control, &init); pthread_mutex_lock(mutex_ptr); // Do something here that requires the lock. int result = 2*arg; pthread_mutex_unlock(mutex_ptr); return result; } In the above sample in the child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky but maybe the best way of dealing with this skirting the standards. If the pthread_once_t contains a mutex then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap than it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special which would defeat this. 
Searching comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standards really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?


  • Getting the constructor of an Interface Type through reflection, is there a better approach than looping through all assembly types?

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly types for implementations of my interface? I don't care about rewriting a piece of my code, I want to do it right on the first place so that I won't need to come back again and again and again. EDIT #1 Following Sam's advice, I will for now go with the IName and Name convention. However, is it me or there's some way to improve my code? Thanks! =)
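
    The immediate reason the lookup finds nothing is that typeof(T) is the interface, and an interface never has constructors; the constructor has to be resolved on the concrete class. A minimal C# sketch of that idea (illustrative names, not the original code) could look like this:

        using System;
        using System.DirectoryServices;
        using System.Linq;
        using System.Reflection;

        internal static class DirectoryEntryFactory
        {
            // Sketch: resolve the concrete class implementing T (e.g. Group for IGroup)
            // and invoke its internal DirectoryEntry constructor.
            public static T Create<T>(DirectoryEntry entry)
            {
                Type implementation = typeof(T).Assembly
                    .GetTypes()
                    .FirstOrDefault(t => typeof(T).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);

                if (implementation == null)
                    throw new InvalidOperationException("No implementation found for " + typeof(T).Name);

                // Include NonPublic so the internal constructor is found.
                ConstructorInfo ctor = implementation.GetConstructor(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null,
                    new[] { typeof(DirectoryEntry) },
                    null);

                if (ctor == null)
                    throw new InvalidOperationException("No DirectoryEntry constructor on " + implementation.Name);

                return (T)ctor.Invoke(new object[] { entry });
            }
        }

    A Dictionary<Type, Type> registration, or the DirectorySchemaAttribute that is already on the interfaces, could replace the assembly scan if scanning every type turns out to be too slow.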


  • java.lang.OutOfMemoryError: bitmap size exceeds VM budget

    - by Angel
    Hi, I am trying to change the layout of my application from portrait to landscape and vice-versa. But if i do it frequently or more than once then at times my application crashes.. Below is the error log. Please suggest what can be done? < 01-06 09:52:27.787: ERROR/dalvikvm-heap(17473): 1550532-byte external allocation too large for this process. 01-06 09:52:27.787: ERROR/dalvikvm(17473): Out of memory: Heap Size=6471KB, Allocated=4075KB, Bitmap Size=9564KB 01-06 09:52:27.787: ERROR/(17473): VM won't let us allocate 1550532 bytes 01-06 09:52:27.798: DEBUG/skia(17473): --- decoder-decode returned false 01-06 09:52:27.798: DEBUG/AndroidRuntime(17473): Shutting down VM 01-06 09:52:27.798: WARN/dalvikvm(17473): threadid=3: thread exiting with uncaught exception (group=0x4001e390) 01-06 09:52:27.807: ERROR/AndroidRuntime(17473): Uncaught handler: thread main exiting due to uncaught exception 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): java.lang.RuntimeException: Unable to start activity ComponentInfo{}: android.view.InflateException: Binary XML file line #2: Error inflating class 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2596) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2621) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3812) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.access$2300(ActivityThread.java:126) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1936) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.os.Handler.dispatchMessage(Handler.java:99) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.os.Looper.loop(Looper.java:123) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.main(ActivityThread.java:4595) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at java.lang.reflect.Method.invokeNative(Native Method) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at java.lang.reflect.Method.invoke(Method.java:521) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at dalvik.system.NativeStart.main(Native Method) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): Caused by: android.view.InflateException: Binary XML file line #2: Error inflating class 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.createView(LayoutInflater.java:513) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:563) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.inflate(LayoutInflater.java:385) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.inflate(LayoutInflater.java:320) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.inflate(LayoutInflater.java:276) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at 
com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:207) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.Activity.setContentView(Activity.java:1629) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at onCreate(Game.java:98) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2544) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): ... 12 more 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): Caused by: java.lang.reflect.InvocationTargetException 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.widget.LinearLayout.(LinearLayout.java:92) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at java.lang.reflect.Constructor.constructNative(Native Method) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at java.lang.reflect.Constructor.newInstance(Constructor.java:446) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.LayoutInflater.createView(LayoutInflater.java:500) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): ... 22 more 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): Caused by: java.lang.OutOfMemoryError: bitmap size exceeds VM budget 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:464) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:340) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.graphics.drawable.Drawable.createFromResourceStream(Drawable.java:697) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.content.res.Resources.loadDrawable(Resources.java:1705) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.content.res.TypedArray.getDrawable(TypedArray.java:548) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.View.(View.java:1850) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.View.(View.java:1799) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): at android.view.ViewGroup.(ViewGroup.java:296) 01-06 09:52:27.857: ERROR/AndroidRuntime(17473): ... 26 more


  • Getting the constructor of an Interface Type through reflection?

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Might somebody guide me to the right path to follow in this situation?
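
    The same point applies here: the interface type parameter has no constructors, so the concrete type must be located first. As an alternative to scanning the assembly with reflection, a small explicit registration map (again only a sketch with invented names, not the original code) keeps the implementing classes internal while avoiding reflection entirely:

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;

        internal static class DirectoryEntryRegistry
        {
            // Explicit interface-to-factory map; each factory calls the internal
            // constructor directly, so no constructor lookup is needed.
            private static readonly Dictionary<Type, Func<DirectoryEntry, object>> Factories =
                new Dictionary<Type, Func<DirectoryEntry, object>>
                {
                    { typeof(IGroup), e => new Group(e) },
                    // { typeof(IUser), e => new User(e) },
                    // { typeof(IOrganizationalUnit), e => new OrganizationalUnit(e) },
                };

            public static T Create<T>(DirectoryEntry entry)
            {
                Func<DirectoryEntry, object> factory;
                if (!Factories.TryGetValue(typeof(T), out factory))
                    throw new InvalidOperationException("No factory registered for " + typeof(T).Name);
                return (T)factory(entry);
            }
        }

    Since the map lives in the same assembly as the internal classes, their internal constructors remain hidden from outside callers.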


  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never


  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents Introduction PE file format and COFF header COFF file header BaseCoffReader Byte4ByteCoffReader UnsafeCoffReader ManagedCoffReader Conclusion History This article is also available on CodeProject Introduction Sometimes, you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world most data structures are stored in special binary format. Either we call a WinApi function or we want to read from special files like images, spool files, executables or may be the previously announced Outlook Personal Folders File. Most specifications for these files can be found on the MSDN Libarary: Open Specification In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF PE file format and COFF header Before we start we need to know how this file is formatted. The following figure shows an overview of the Microsoft PE executable format. Source: Microsoft Our goal is to get the PE header. As we can see, the image starts with a MS-DOS 2.0 header with is not important for us. From the documentation we can read "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes that ensures that the image is a PE file. The signature is: PE\0\0. To prove this we first seek to the offset 0x3c, read if the file consist the signature. So we need to declare some constants, because we do not want magic numbers.   private const int PeSignatureOffsetLocation = 0x3c; private const int PeSignatureSize = 4; private const string PeSignatureContent = "PE";   Then a method for moving the reader to the correct location to read the offset of signature. With this method we always move the underlining Stream of the BinaryReader to the start location of the PE signature.   private void SeekToPeSignature(BinaryReader br) { // seek to the offset for the PE signagure br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin); // read the offset int offsetToPeSig = br.ReadInt32(); // seek to the start of the PE signature br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin); }   Now, we can check if it is a valid PE image by reading of the next 4 byte contains the content PE.   private bool IsValidPeSignature(BinaryReader br) { // read 4 bytes to get the PE signature byte[] peSigBytes = br.ReadBytes(PeSignatureSize); // convert it to a string and trim \0 at the end of the content string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0'); // check if PE is in the content return peContent.Equals(PeSignatureContent); }   With this basic functionality we have a good base reader class to try the different methods of parsing the COFF file header. 
COFF file header The COFF header has the following structure:

Offset  Size  Field
0       2     Machine
2       2     NumberOfSections
4       4     TimeDateStamp
8       4     PointerToSymbolTable
12      4     NumberOfSymbols
16      2     SizeOfOptionalHeader
18      2     Characteristics

If we translate this table to code, we get something like this:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public MachineType Machine; public ushort NumberOfSections; public uint TimeDateStamp; public uint PointerToSymbolTable; public uint NumberOfSymbols; public ushort SizeOfOptionalHeader; public Characteristic Characteristics; } BaseCoffReader All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern or the Template Method pattern sticks out on the bookshelf. I have decided to take the Template Method pattern in this case, because Parse() should handle the IO for all implementations and the concrete parsing should be done in its derived classes.   public CoffHeader Parse() { using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read))) { SeekToPeSignature(br); if (!IsValidPeSignature(br)) { throw new BadImageFormatException(); } return ParseInternal(br); } } protected abstract CoffHeader ParseInternal(BinaryReader br);   First we open the BinaryReader and seek to the PE signature, then we check if it contains a valid PE signature and the rest is done by the derived implementations. Byte4ByteCoffReader The first solution uses the BinaryReader. It is the general way to get the data. We only need to know the order of the fields, their data types and their sizes. If we read byte for byte we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignment.   protected override CoffHeader ParseInternal(BinaryReader br) { CoffHeader coff = new CoffHeader(); coff.Machine = (MachineType)br.ReadInt16(); coff.NumberOfSections = (ushort)br.ReadInt16(); coff.TimeDateStamp = br.ReadUInt32(); coff.PointerToSymbolTable = br.ReadUInt32(); coff.NumberOfSymbols = br.ReadUInt32(); coff.SizeOfOptionalHeader = (ushort)br.ReadInt16(); coff.Characteristics = (Characteristic)br.ReadInt16(); return coff; }   If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type is changed, a new member is added or the ordering of members is changed, the maintenance costs of this method are very high. UnsafeCoffReader Another way to bring the data into this structure is to use a "magic" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET Runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure.   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { unsafe { fixed (byte* packet = &data[0]) { this = *(CoffHeader*)packet; } } } }   The "magic" trick is in the statement: this = *(CoffHeader*)packet;. What happens here? We have a fixed block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure and not only the reference. 
To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file.   protected override CoffHeader ParseInternal(BinaryReader br) { return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader)))); }   This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and it could introduce some security and stability risks. ManagedCoffReader In this solution we use the same structure-assignment approach as above, but we need to replace the unsafe part of the constructor with the following managed part:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { IntPtr coffPtr = IntPtr.Zero; try { int size = Marshal.SizeOf(typeof(CoffHeader)); coffPtr = Marshal.AllocHGlobal(size); Marshal.Copy(data, 0, coffPtr, size); this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader)); } finally { Marshal.FreeHGlobal(coffPtr); } } }     Conclusion We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest way, because we know each member, its size and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The two other solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option; we gain performance, but we take on some security risks.
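For completeness, a rough sketch of the calling side follows. The article never shows it, so the constructor taking a file path below is an assumption based on the _fileName field used in Parse():

using System;

// Hypothetical caller; assumes each concrete reader has a constructor that accepts the
// path of the executable and stores it in the _fileName field consumed by Parse().
class CoffReaderDemo
{
    static void Main()
    {
        string path = @"C:\Windows\notepad.exe";
        var readers = new BaseCoffReader[]
        {
            new Byte4ByteCoffReader(path),
            new UnsafeCoffReader(path),
            new ManagedCoffReader(path)
        };

        foreach (var reader in readers)
        {
            CoffHeader header = reader.Parse();
            // all three strategies should report the same machine type and section count
            Console.WriteLine("{0}: {1}, {2} sections",
                reader.GetType().Name, header.Machine, header.NumberOfSections);
        }
    }
}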

    Read the article

  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC pattern using Unity and so I decided to share my learnings (I know that’s not an English word, but you get the point). Ok, so I have an ASP.net project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product: 1: public class Product 2: { 3: public string Name { get; set; } 4: public string Description { get; set; } 5:  6: public Product() 7: { 8: Name = "iPad"; 9: Description = "Not just a reader!"; 10: } 11:  12: public string WriteProductDetails() 13: { 14: return string.Format("Name: {0} Description: {1}", Name, Description); 15: } 16: } In the Page_Load event of the default.aspx, I’ll need something like: 1: Product product = new Product(); 2: productDetailsLabel.Text = product.WriteProductDetails(); Now, let’s go ‘Unity’fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of Unity pattern from the above link. This image might not make much sense to you now, but as we proceed, things will get better. The first step to implement the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project. 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } Let’s make our Product class to implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project: Microsoft.Practices.Unity Microsoft.Practices.Unity.Configuration Microsoft.Practices.Unity.StaticFactory Microsoft.Practices.ObjectBuilder2 We need to add a few lines to the web.config file. The line below tells what version of Unity pattern we’ll be using. 1: <configSections> 2: <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> 3: </configSections> Add another block with the same name as the section name declared above – ‘unity’. 1: <unity> 2: <typeAliases> 3: <!--Custom object types--> 4: <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/> 5: <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/> 6: </typeAliases> 7: <containers> 8: <container name="unityContainer"> 9: <types> 10: <type type="IProduct" mapTo="Product"/> 11: </types> 12: </container> 13: </containers> 14: </unity> From the Unity Configuration schematic shown above, you see that the ‘unity’ block has a ‘typeAliases’ and a ‘containers’ segment. The typeAlias element gives a ‘short-name’ for a type. This ‘short-name’ can be used to point to this type any where in the configuration file (web.config in our case, but all this information could be coming from an external xml file as well). The container element holds all the mapping information. This container is referenced through its name attribute in the code and you can have multiple of these container elements in the containers segment. The ‘type’ element in line 10 basically says: ‘When Unity requests to resolve the alias IProduct, return an instance of whatever the short-name of Product points to’. This is the most basic piece of Unity pattern and all of this is accomplished purely through configuration. 
So if, in future, you have a change in your model, all you need to do is - implement IProduct on the new model class and - either add a typeAlias for the new type and point the mapTo attribute to the newly declared alias - or modify the mapTo attribute of the type element to point to the new alias (as the case may be). Now for the calling code. It’s a good idea to store your unity container details in the Application cache, as this is rarely bound to change and it also makes for better performance. The Global.asax.cs file comes to our rescue: 1: protected void Application_Start(object sender, EventArgs e) 2: { 3: // create and populate a new Unity container from configuration 4: IUnityContainer unityContainer = new UnityContainer(); 5: UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); 6: section.Containers["unityContainer"].Configure(unityContainer); 7: Application["UnityContainer"] = unityContainer; 8: } 9:  10: protected void Application_End(object sender, EventArgs e) 11: { 12: Application["UnityContainer"] = null; 13: } All this says is: create an instance of UnityContainer() and read the ‘unity’ section from the configSections segment of the web.config file. Then get the container named ‘unityContainer’ and store it in the Application cache. In my code-behind file, I’ll make use of this UnityContainer to create an instance of the Product type. 1: public partial class _Default : Page 2: { 3: private IUnityContainer unityContainer; 4: protected void Page_Load(object sender, EventArgs e) 5: { 6: unityContainer = Application["UnityContainer"] as IUnityContainer; 7: if (unityContainer == null) 8: { 9: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 10: } 11: else 12: { 13: IProduct productInstance = unityContainer.Resolve<IProduct>(); 14: productDetailsLabel.Text = productInstance.WriteProductDetails(); 15: } 16: } 17: } Looking at the ‘else’ block, I’m asking the unityContainer object to resolve the IProduct type. All this does is look at the matching type in the container, read its mapTo attribute value, get the full name from the alias and create an instance of the Product class. Fabulous!! I’ll go into more detail in the next blog. The code for this blog can be found here.
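As an aside, the same mapping could also be registered programmatically instead of through web.config; a minimal sketch of that alternative (not part of the original walkthrough, which stays configuration-driven) might look like this:

using Microsoft.Practices.Unity;
using ProductModel;

public static class CodeOnlyBootstrap
{
    // Equivalent of the <type type="IProduct" mapTo="Product"/> mapping, expressed in code.
    public static string GetProductDetails()
    {
        IUnityContainer container = new UnityContainer();
        container.RegisterType<IProduct, Product>();

        IProduct product = container.Resolve<IProduct>();  // returns a Product instance
        return product.WriteProductDetails();
    }
}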

    Read the article

  • How to use SharePoint modal dialog box to display Custom Page Part3

    - by ybbest
    In the second part of the series, I showed you how to display and close a custom page in a SharePoint modal dialog using JavaScript and display a message after the modal dialog is closed. In this post, I’d like to show you how to use SPLongOperation with the modal dialog box. You can download the source code here. 1. Firstly, modify the element file as follows <Elements xmlns="http://schemas.microsoft.com/sharepoint/"> <CustomAction Id="ReportConcern" RegistrationType="ContentType" RegistrationId="0x010100866B1423D33DDA4CA1A4639B54DD4642" Location="EditControlBlock" Sequence="107" Title="Display Custom Page" Description="To Display Custom Page in a modal dialog box on this item"> <UrlAction Url="javascript: function emitStatus(messageToDisplay) { statusId = SP.UI.Status.addStatus(messageToDisplay.message + ' ' +messageToDisplay.location ); SP.UI.Status.setStatusPriColor(statusId, 'Green'); } function portalModalDialogClosedCallback(result, value) { if (value !== null) { emitStatus(value); } } var options = { url: '{SiteUrl}' + '/_layouts/YBBEST/TitleRename.aspx?List={ListId}&amp;ID={ItemId}', title: 'Rename title', allowMaximize: false, showClose: true, width: 500, height: 300, dialogReturnValueCallback: portalModalDialogClosedCallback }; SP.UI.ModalDialog.showModalDialog(options);" /> </CustomAction> </Elements> 2. In your code-behind, you can implement a close-dialog function as below. This will close your modal dialog box once the button is clicked and display a status bar. Note that, because the custom page runs inside the dialog’s iframe, the close script has to call window.frameElement.commonModalDialogClose. protected void SubmitClicked(object sender, EventArgs e) { //Process stuff string message = "You clicked the Submit button"; string newLocation="http://www.google.com"; string information = string.Format("{{'message':'{0}','location':'{1}' }}", message, newLocation); var longOperation = new SPLongOperation(Page); longOperation.LeadingHTML = "Processing the application"; longOperation.TrailingHTML = "Please wait while the application is being processed."; longOperation.Begin(); Thread.Sleep(5*1000); var closeDialogScript = GetCloseDialogScriptForLongProcess(information); longOperation.EndScript(closeDialogScript); } protected static string GetCloseDialogScriptForLongProcess(string message) { var scriptBuilder = new StringBuilder(); scriptBuilder.Append("window.frameElement.commonModalDialogClose(1,").Append(message).Append(");"); return scriptBuilder.ToString(); }   References: How to: Display a Page as a Modal Dialog Box
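One fragile spot in the sample above is the hand-built JavaScript object literal: if message or newLocation ever contains a single quote or a backslash, the emitted script breaks. A small helper along these lines (my addition, not part of the original post) could escape the values before formatting:

using System.Text;

public static class DialogScriptHelper
{
    // Escapes a value so it can be embedded safely inside a single-quoted
    // JavaScript string literal in the dialog return payload.
    public static string EscapeForJsLiteral(string value)
    {
        if (string.IsNullOrEmpty(value))
        {
            return string.Empty;
        }

        var sb = new StringBuilder(value.Length);
        foreach (char c in value)
        {
            switch (c)
            {
                case '\\': sb.Append("\\\\"); break;
                case '\'': sb.Append("\\'"); break;
                case '\r': sb.Append("\\r"); break;
                case '\n': sb.Append("\\n"); break;
                default: sb.Append(c); break;
            }
        }
        return sb.ToString();
    }
}

// Usage (replaces the raw string.Format call):
// string information = string.Format("{{'message':'{0}','location':'{1}' }}",
//     DialogScriptHelper.EscapeForJsLiteral(message),
//     DialogScriptHelper.EscapeForJsLiteral(newLocation));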

    Read the article

  • Using Unity – Part 2

    - by nmarun
    In the first part of this series, we created a simple project and learned how to implement the IoC pattern using Unity. In this one, I’ll show how you can instantiate other types that implement our IProduct interface. One place where one would want to use this feature is to create mock types for testing purposes. Alright, let’s dig in. I added another class – Product2.cs – to the ProductModel project. 1: public class Product2 : IProduct 2: { 3: public string Name { get; set;} 4: public Category Category { get; set; } 5: public DateTime MfgDate { get;set; } 6:  7: public Product2() 8: { 9: Name = "Canon Digital Rebel XTi"; 10: Category = new Category {Name = "Electronics", SubCategoryName = "Digital Cameras"}; 11: MfgDate = DateTime.Now; 12: } 13:  14: public string WriteProductDetails() 15: { 16: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}", 17: Name, Category, MfgDate.ToShortDateString()); 18: } 19: } Highlights of this class are that it implements the IProduct interface and that it has some different properties from the Product class. The Category class looks like below: 1: public class Category 2: { 3: public string Name { get; set; } 4: public string SubCategoryName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0} - {1}", Name, SubCategoryName); 9: } 10: } We’ll go to our web.config file to add the configuration information about this new class – Product2 – that we created. Let’s first add a typeAlias element. 1: <typeAlias alias="Product2" type="ProductModel.Product2, ProductModel"/> That’s all that is needed for us to get an instance of Product2 in our application. I have a new button added to the .aspx page and the click event of this button is where all the magic happens: 1: private IUnityContainer unityContainer; 2: protected void Page_Load(object sender, EventArgs e) 3: { 4: unityContainer = Application["UnityContainer"] as IUnityContainer; 5: 6: if (unityContainer == null) 7: { 8: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 9: } 10: else 11: { 12: if (!IsPostBack) 13: { 14: IProduct productInstance = unityContainer.Resolve<IProduct>(); 15: productDetailsLabel.Text = productInstance.WriteProductDetails(); 16: } 17: } 18: } 19:  20: protected void Product2Button_Click(object sender, EventArgs e) 21: { 22: unityContainer.RegisterType<IProduct, Product2>(); 23: IProduct product2Instance = unityContainer.Resolve<IProduct>(); 24: productDetailsLabel.Text = product2Instance.WriteProductDetails(); 25: } The unityContainer instance is set in the Page_Load event. Line 22 in the click event of the Product2Button registers a type mapping in the container. In English, this means that when unityContainer tries to resolve IProduct, it gets an instance of Product2. Once this code runs, the following output is rendered: There’s another way of doing this. You can resolve an instance of the requested type with a name from the container. 
We’ll have to update the container element of our web.config file to include the following: 1: <container name="unityContainer"> 2: <types> 3: <type type="IProduct" mapTo="Product"/> 4: <!-- Named mapping for IProduct to Product --> 5: <type type="IProduct" mapTo="Product" name="LegacyProduct" /> 6: <!-- Named mapping for IProduct to Product2 --> 7: <type type="IProduct" mapTo="Product2" name="NewProduct" /> 8: </types> 9: </container> I’ve added a Dropdownlist and a button to the design page: 1: <asp:DropDownList ID="ModelTypesList" runat="server"> 2: <asp:ListItem Text="Legacy Product" Value="LegacyProduct" /> 3: <asp:ListItem Text="New Product" Value="NewProduct" /> 4: </asp:DropDownList> 5: <br /> 6: <asp:Button ID="SelectedModelButton" Text="Get Selected Instance" runat="server" 7: onclick="SelectedModelButton_Click" /> 1: protected void SelectedModelButton_Click(object sender, EventArgs e) 2: { 3: // get the selected value: LegacyProduct or NewProduct 4: string modelType = ModelTypesList.SelectedValue; 5: // pass the modelType to the Resolve method 6: IProduct customModel = unityContainer.Resolve<IProduct>(modelType); 7: productDetailsLabel.Text = customModel.WriteProductDetails(); 8: } Pretty straightforward, right? The only thing to note here is that the values of the dropdown list items need to match the name attribute of the type elements. Depending on what you select, you’ll get an instance of either the Product class or the Product2 class and the corresponding WriteProductDetails() method is called. Now you see how either of these methods can be used to create mock objects for your test project. See the code here. I’ll continue to share more on Unity in the next blog.
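As a rough sketch of the testing scenario mentioned above, a mock IProduct could be swapped in through either mechanism; the MockProduct class and the registration name below are mine, not from the article:

using Microsoft.Practices.Unity;
using ProductModel;

// Hypothetical mock used only by tests; it satisfies IProduct without touching real data.
public class MockProduct : IProduct
{
    public string WriteProductDetails()
    {
        return "Name: TestProduct Description: used only in unit tests";
    }
}

public static class TestBootstrap
{
    public static IProduct ResolveMock()
    {
        IUnityContainer container = new UnityContainer();

        // Code-based registration, mirroring the RegisterType call shown above;
        // the named mapping could equally be declared in web.config.
        container.RegisterType<IProduct, MockProduct>("MockProduct");

        return container.Resolve<IProduct>("MockProduct");
    }
}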

    Read the article

  • using Generics in C# [closed]

    - by Uphaar Goyal
    I have started looking into using generics in C#. As an example, what I have done is create an abstract class which implements generic methods. These generic methods take a SQL query, a connection string and the type T as parameters, then construct the data set, populate the object and return it back. This way each business object does not need to have a method to populate it with data or construct its data set. All we need to do is pass the type, the SQL query and the connection string and these methods do the rest. I am providing the code sample here. I am just looking to discuss with people who might have a better solution to what I have done.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using MWTWorkUnitMgmtLib.Business;
using System.Collections.ObjectModel;
using System.Reflection;

namespace MWTWorkUnitMgmtLib.TableGateway
{
    public abstract class TableGateway
    {
        public TableGateway()
        {
        }

        protected abstract string GetConnection();

        protected abstract string GetTableName();

        public DataSet GetDataSetFromSql(string connectionString, string sql)
        {
            DataSet ds = null;
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                connection.Open();
                using (ds = new DataSet())
                using (SqlDataAdapter adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(ds);
                }
            }
            return ds;
        }

        public static bool ContainsColumnName(DataRow dr, string columnName)
        {
            return dr.Table.Columns.Contains(columnName);
        }

        public DataTable GetDataTable(string connString, string sql)
        {
            DataSet ds = GetDataSetFromSql(connString, sql);
            DataTable dt = null;
            if (ds != null)
            {
                if (ds.Tables.Count > 0)
                {
                    dt = ds.Tables[0];
                }
            }
            return dt;
        }

        public T Construct<T>(DataRow dr, T t) where T : class, new()
        {
            Type t1 = t.GetType();
            PropertyInfo[] properties = t1.GetProperties();
            foreach (PropertyInfo property in properties)
            {
                if (ContainsColumnName(dr, property.Name) && (dr[property.Name] != null))
                    property.SetValue(t, dr[property.Name], null);
            }
            return t;
        }

        public T GetByID<T>(string connString, string sql, T t) where T : class, new()
        {
            DataTable dt = GetDataTable(connString, sql);
            DataRow dr = dt.Rows[0];
            return Construct(dr, t);
        }

        public List<T> GetAll<T>(string connString, string sql, T t) where T : class, new()
        {
            List<T> collection = new List<T>();
            DataTable dt = GetDataTable(connString, sql);
            foreach (DataRow dr in dt.Rows)
                collection.Add(Construct(dr, t));
            return collection;
        }
    }
}
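As a rough usage sketch, a concrete gateway built on this base class might look like the following; the ProductGateway and Product classes are illustrative and not part of the original post:

using System.Collections.Generic;
using MWTWorkUnitMgmtLib.TableGateway;

// Illustrative business object; property names are assumed to match the column names
// returned by the query, since Construct<T> maps columns to properties by name.
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
}

// Illustrative concrete gateway; the connection string and table name are placeholders.
public class ProductGateway : TableGateway
{
    private const string ConnString = "Data Source=.;Initial Catalog=MWT;Integrated Security=True";

    protected override string GetConnection()
    {
        return ConnString;
    }

    protected override string GetTableName()
    {
        return "Product";
    }

    public List<Product> GetAllProducts()
    {
        // GetAll<T> runs the query and uses Construct<T> to copy column values
        // onto the Product's matching properties via reflection.
        return GetAll(ConnString, "SELECT ProductID, Name FROM Product", new Product());
    }
}

One design point worth discussing: as written, GetAll<T> passes the same instance t to Construct<T> for every row, so each element of the returned list ends up referencing that single object with the last row's values; creating a fresh instance per row (which the new() constraint already allows) would likely be part of the better solution the poster is asking about.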

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >