Dataset schema: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k).
I want to run an Ansible action on the condition that a shell command doesn't return the expected output. ogr2ogr --formats pretty-prints a list of compatible file formats. I want to grep the formats output, and if my expected file formats aren't in the output, I want to run a command to install these components. Does anyone know how to do this?

    - name: check if proper ogr formats set up
      command: ogr2ogr --formats | grep $item
      with_items:
        - PostgreSQL
        - FileGDB
        - Spatialite
      register: ogr_check

    # If grep from ogr_check didn't find a certain format from with_items, run this
    - name: install proper ogr formats
      action: DO STUFF
      when: Not sure what to do here
First, please make sure you are using Ansible 1.3 or later. Ansible is still changing pretty quickly from what I can see, and a lot of awesome features and bug fixes are crucial. As for checking, you can try something like this, taking advantage of grep's exit code:

    - name: check if proper ogr formats set up
      shell: ogr2ogr --formats | grep $item
      with_items:
        - PostgreSQL
        - FileGDB
        - Spatialite
      register: ogr_check
      # grep will exit with 1 when no results found.
      # This causes the task not to halt play.
      ignore_errors: true

    - name: install proper ogr formats
      action: DO STUFF
      when: ogr_check|failed

There are some other useful register variables, namely item.stdout_lines. If you'd like to see what's registered to the variable in detail, try the following task:

    - debug: msg={{ogr_check}}

and then run the task in double verbose mode via ansible-playbook my-playbook.yml -vv. It will spit out a lot of useful dictionary values.
Ansible
19,942,269
27
I have Python 2.7 and Python 3.5 on my Ansible server, but while executing playbooks it uses Python 2.7. I want Ansible to use Python 3.5 when executing playbooks. So far I have tried, in order:

1. Setting the export path.
2. Changing the default interpreter path in ansible.cfg.
3. Giving a specific interpreter path in the hosts file for the particular host.

But still, Ansible is not running Python 3.
If you want to set the Python interpreter for individual hosts and groups, set the ansible_python_interpreter inventory variable. If, however, you want to set the Python interpreter for global use, then set the interpreter_python key in the [defaults] section of the configuration file ansible.cfg. For a complete list of possible values for the two options above, please see: https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html Also see this example for usage of ansible_python_interpreter: https://docs.ansible.com/ansible/2.4/python_3_support.html , section "Testing Python 3 module support".
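A minimal sketch of both approaches (the host name and interpreter path are placeholders; note that interpreter_python requires Ansible 2.8 or later):

    # inventory: per-host setting
    [myservers]
    server1.example.com ansible_python_interpreter=/usr/bin/python3.5

    # ansible.cfg: global setting
    [defaults]
    interpreter_python = /usr/bin/python3.5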
Ansible
59,380,824
26
I'm struggling to install the Ansible Python package on my Windows 10 machine. I don't need Ansible to run on my machine; this is purely for development purposes on my Windows host. All commands will later be issued on a Linux machine. After running:

    pip install ansible

I get the following exception:

    Command "c:\users\evaldas.buinauskas\appdata\local\programs\python\python37-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-record-dvfgngpp\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\evaldas.buinauskas\AppData\Local\Temp\pip-install-hpay_le9\ansible\

There's also a repetitive exception that I think is the root cause:

    error: can't copy 'lib\ansible\module_utils\ansible_release.py': doesn't exist or not a regular file

This GitHub issue says that installing should be possible, just not running it. That's basically all I really need. I tried running CMD/PowerShell/Cygwin as Administrator; it didn't help. Also, there's an answer that tells how to install it on Windows: How to overcome - pip install ansible on windows failing with filename or extension too long on windows But I don't really understand how to get a wheel file for the Ansible package.
Installing Ansible on Windows is cumbersome. My advice is not a direct solution for installing Ansible on Windows, but rather a workaround: I use a Docker container with Ansible for developing playbooks on my Windows machine. You'd need Docker for Windows on your machine. Here's the Dockerfile:

    FROM alpine:3.7

    ENV ANSIBLE_VERSION=2.5.4

    ENV BUILD_PACKAGES \
      bash \
      curl \
      tar \
      nano \
      openssh-client \
      sshpass \
      git \
      python \
      py-boto \
      py-dateutil \
      py-httplib2 \
      py-jinja2 \
      py-paramiko \
      py-pip \
      py-setuptools \
      py-yaml \
      ca-certificates

    RUN apk --update add --virtual build-dependencies \
          gcc \
          musl-dev \
          libffi-dev \
          openssl-dev \
          python-dev && \
        set -x && \
        apk update && apk upgrade && \
        apk add --no-cache ${BUILD_PACKAGES} && \
        pip install --upgrade pip && \
        pip install python-keyczar docker-py boto3 botocore && \
        apk del build-dependencies && \
        rm -rf /var/cache/apk/* && \
        mkdir -p /etc/ansible/ /ansible && \
        echo "[local]" >> /etc/ansible/hosts && \
        echo "localhost" >> /etc/ansible/hosts && \
        curl -fsSL https://releases.ansible.com/ansible/ansible-${ANSIBLE_VERSION}.tar.gz -o ansible.tar.gz && \
        tar -xzf ansible.tar.gz -C /ansible --strip-components 1 && \
        rm -fr ansible.tar.gz /ansible/docs /ansible/examples /ansible/packaging

    ENV ANSIBLE_GATHERING=smart \
        ANSIBLE_HOST_KEY_CHECKING=false \
        ANSIBLE_RETRY_FILES_ENABLED=false \
        ANSIBLE_ROLES_PATH=/ansible/playbooks/roles \
        ANSIBLE_SSH_PIPELINING=True \
        PYTHONPATH=/ansible/lib \
        PATH=/ansible/bin:$PATH \
        ANSIBLE_LIBRARY=/ansible/library \
        EDITOR=nano

    WORKDIR /ansible/playbooks

    ENTRYPOINT ["ansible-playbook"]

Build the container with the docker build command. Afterwards you can create a small bash script that executes the docker run command and mounts your current directory into the container. You may call it ansible-playbook.sh:

    winpty docker run --rm -it -v /$(pwd):/ansible/playbooks <name of your container> $@

Now you will be able to launch an Ansible playbook with ./ansible-playbook.sh <your playbook> in Git Bash. If you'd like to run this in PowerShell you would probably need to remove the winpty command, but I have not tested this in PS yet. It is not the finest solution but it gets the job done. Hope it helps you, too.
Ansible
51,167,099
26
I'm trying to filter properties of an object in JMESPath based on the value of a subproperty, and I want to include only those properties where the subproperty is set to a specific value. Based on this example data:

    {
      "a": { "feature": { "enabled": true } },
      "b": { },
      "c": { "feature": { "enabled": false } }
    }

I'd like to get an object with all properties where the feature is enabled:

    {
      "a": { "feature": { "enabled": true } }
    }

I figured I could use this JMESPath query to filter the objects where feature.enabled is set to true. Unfortunately, it doesn't seem to work and instead returns an empty array:

    *[?feature.enabled==`true`]

*.feature.enabled or *[feature.enabled] return just the boolean values without any context. Even if *[?feature.enabled==true] worked, it would just be an array of the property values, but I need the keys (a and c) as well. Is there any way to make this happen in JMESPath? This is all part of an Ansible playbook, so there would certainly be a way to achieve the selection in a different way (Jinja2 templates or a custom plugin), but I wanted to try JMESPath and would reason that it should be capable of such a task.
With the dict2items filter in Ansible 2.5 and later, you can do it with:

    - debug:
        msg: "{{ dict(my_data | dict2items | json_query('[?value.feature.enabled].[key, value]')) }}"

The result:

    "msg": {
        "a": {
            "feature": {
                "enabled": true
            }
        }
    }
Ansible
41,579,581
26
Is there a way to use the Ansible Python API to get a list of hosts from a given inventory file / group combination? For example, our inventory files are split up by service type:

    [dev:children]
    dev_a
    dev_b

    [dev_a]
    my.host.int.abc.com

    [dev_b]
    my.host.int.xyz.com

    [prod:children]
    prod_a
    prod_b

    [prod_a]
    my.host.abc.com

    [prod_b]
    my.host.xyz.com

Can I use ansible.inventory in some way to pass in a specific inventory file, and the group I want to act on, and have it return a list of hosts that match?
Do the same trick from before, but instead of all, pass the group name you want to list:

    ansible <group name here> -i <inventory file here> --list-hosts
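Since the question asks specifically about the Python API, here is a minimal sketch using the inventory classes available in Ansible 2.4 and later (the internal API is not officially stable, so the exact imports may vary between releases; "hosts" is a placeholder for your inventory path):

    from ansible.parsing.dataloader import DataLoader
    from ansible.inventory.manager import InventoryManager

    loader = DataLoader()
    # point sources at the inventory file shown in the question
    inventory = InventoryManager(loader=loader, sources=["hosts"])
    # get_hosts() resolves child groups, so "dev" returns the hosts of dev_a and dev_b
    dev_hosts = [host.name for host in inventory.get_hosts("dev")]
    print(dev_hosts)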
Ansible
37,623,849
26
In Ansible, in a role, I have vars files like this:

    vars/
      app1.yml
      app2.yml

Each file contains vars specific to an app/website like this:

    name: app1
    git_repo: https://github.com/philgyford/app1.git
    # ...

Ideally, without the task knowing in advance which apps have variable files, I'd like to end up with an array called apps like this:

    apps:
      - name: app1
        git_repo: https://github.com/philgyford/app1.git
        # ...
      - name: app2
        git_repo: https://github.com/philgyford/app2.git
        # ...

ie, that combines the variables from the files into one. I know I can load all the variable files like this:

    - name: Load var files
      with_fileglob:
        - ../vars/*.yml
      include_vars: '{{ item }}'

But given each file has identical variable names, it will overwrite each previous set of variables. I can't see a way to load the variables and put them into an apps array. I'm open to rearranging things slightly if it's the only way to make something like this possible.
You can not do that. Variables will always override variables with the same name. The only thing you could do with this exact setup is to write your own vars plugin which reads those files and merges them into an array. If you are open to changing the structure of your apps definition, you can use a hash and set hash_behaviour=merge. In each vars file you'd then have a definition like:

    apps:
      app1:
        git_repo: https://github.com/philgyford/app1.git

    apps:
      app2:
        git_repo: https://github.com/philgyford/app2.git

When Ansible loads both files it will merge them automatically into:

    apps:
      app1:
        git_repo: https://github.com/philgyford/app1.git
      app2:
        git_repo: https://github.com/philgyford/app2.git

But be advised that hash_behaviour=merge fundamentally changes the default behavior of Ansible on a global level. Make sure all your roles do not have issues with this setting. The documentation mentions: "We generally recommend not using this setting unless you think you have an absolute need for it". If you still use Ansible 1 you could use one of my old plugins instead: include_vars_merged. Basically this adds the behavior of hash_behaviour=merge to only a single task. I have not yet looked into migrating this to Ansible 2 though, and currently it looks like I won't have the need for it any longer.
Ansible
35,554,415
26
I have a JSON file in the same directory as my Ansible script. Following is the content of the JSON file:

    {
      "resources": [
        { "name": "package1", "downloadURL": "path-to-file1" },
        { "name": "package2", "downloadURL": "path-to-file2" }
      ]
    }

I am trying to download these packages using get_url. Following is the approach:

    ---
    - hosts: localhost
      vars:
        package_dir: "/var/opt/"
        version_file: "{{lookup('file','/home/shasha/devOps/tests/packageFile.json')}}"
      tasks:
        - name: Printing the file.
          debug: msg="{{version_file}}"

        - name: Downloading the packages.
          get_url: url="{{item.downloadURL}}" dest="{{package_dir}}" mode=0777
          with_items: version_file.resources

The first task is printing the content of the file correctly, but in the second task I am getting the following error:

    [DEPRECATION WARNING]: Skipping task due to undefined attribute, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
You have to add a from_json Jinja2 filter after the lookup:

    version_file: "{{ lookup('file','/home/shasha/devOps/tests/packageFile.json') | from_json }}"
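With that in place, the download task from the question can iterate over the parsed structure; note that in more recent Ansible versions the with_items value should also be wrapped in Jinja braces (a sketch reusing the question's variable names):

    - name: Downloading the packages.
      get_url: url="{{ item.downloadURL }}" dest="{{ package_dir }}" mode=0777
      with_items: "{{ version_file.resources }}"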
Ansible
35,403,769
26
play_hosts is a list of all machines for a play. I want to take these and use something like format() to rewrite them like rabbitmq@%s, and then join them together with something like join(). So:

    {{ play_hosts|format(???)|join(', ') }}

All the examples of format use piping where the input is the format string and not a list. Is there a way to use these (or something else) to accomplish what I want? The output should look something like:

    ['rabbitmq@server1', 'rabbitmq@server2', 'rabbitmq@server3', ...]

The Jinja2 doc describes format like this:

    format(value, *args, **kwargs)
    Apply python string formatting on an object:
    {{ "%s - %s"|format("Hello?", "Foo!") }} -> Hello? - Foo!

So it takes three kinds of input but doesn't describe those inputs in the example, which shows one in the pipe and the other two passed in via args. Is there a keyword arg to specify the string that's piped? Please help, python monks!
In Ansible you can use the regex_replace filter:

    {{ play_hosts | map('regex_replace', '^(.*)$', 'rabbitmq@\\1') | list }}
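To also get the comma-separated string asked about in the question, join can be appended in place of list (a straightforward combination of the two filters):

    {{ play_hosts | map('regex_replace', '^(.*)$', 'rabbitmq@\\1') | join(', ') }}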
Ansible
35,183,744
26
I need to create a single file with the contents of a single fact in Ansible. I'm currently doing something like this:

    - template: src=templates/git_commit.j2 dest=/path/to/REVISION

My template file looks like this:

    {{ git_commit }}

Obviously, it'd make a lot more sense to just do something like this:

    - inline_template: content={{ git_revision }} dest=/path/to/REVISION

Puppet offers something similar. Is there a way to do this in Ansible?
Another option to the lineinfile module (as given by udondan's answer) would be to use the copy module and specify the content rather than a source local to the Ansible host. An example task would look something like:

    - name: Copy commit ref to file
      copy:
        content: "{{ git_commit }}"
        dest: /path/to/REVISION

I personally prefer this to lineinfile, as for me lineinfile should be for making slight changes to files that are already there, whereas copy is for making sure a file is in place and looking exactly like you want it to. It also has the benefit of coping with multiple lines. In reality though, I'd be tempted to make this a template task and just have the template file be:

    "{{ git_commit }}"

Which gets created by this task:

    - name: Copy commit ref to file
      template:
        src: path/to/template
        dest: /path/to/REVISION

It's cleaner and it's using modules for exactly what they are meant for.
Ansible
33,768,690
26
I am trying to run an Ansible playbook against a server using an account other than the one I am logged on to the control machine with. I tried to specify ansible_user in the inventory file according to the documentation on Inventory:

    [srv1]
    192.168.1.146 ansible_connection=ssh ansible_user=user1

However Ansible, called with ansible-playbook -i inventory playbook.yml -vvvv, prints the following:

    GATHERING FACTS ***************************************************************
    <192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf

What worked for me was adding the remote_user argument to the playbook:

    - hosts: srv1
      remote_user: user1

Now the same Ansible command connects as user1:

    GATHERING FACTS ***************************************************************
    <192.168.1.146> ESTABLISH CONNECTION FOR USER: user1

Also adding the remote_user variable to ansible.cfg makes Ansible use the intended user instead of the logged-on one. Are ansible_user in the inventory file and remote_user in the playbook/ansible.cfg for different purposes? What is ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
You're likely running into a common issue: the published Ansible docs are for the development version (2.0 right now), and we don't keep the old ones around. It's a big point of contention... Assuming you're using something pre-2.0, the inventory var name you need is ansible_ssh_user. ansible_user works in 2.0 (as does ansible_ssh_user, which gets aliased in).
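So for the pre-2.0 setup in the question, the inventory line would simply use the older variable name:

    [srv1]
    192.168.1.146 ansible_connection=ssh ansible_ssh_user=user1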
Ansible
33,061,524
26
I would like to replace /etc/nginx/sites-enabled with a symlink to my repo. I'm trying to do this using the file module, but that doesn't work as the file module doesn't remove a directory with the force option.

    - name: setup nginx sites-available symlink
      file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
      notify: restart nginx

I could fall back to using shell:

    - name: setup nginx sites-available symlink
      shell: test -d /etc/nginx/sites-available && rm -r /etc/nginx/sites-available && ln -sT /repo/etc/nginx/sites-available /etc/nginx/sites-available
      notify: restart nginx

Is there any better way to achieve this instead of falling back to shell?
When you take your action, it's actually two things: delete a folder, and add a symlink in its place. This is probably also the cleanest way to represent it in Ansible:

    tasks:
      - name: remove the folder
        file: path=/etc/nginx/sites-available state=absent

      - name: setup nginx sites-available symlink
        file: path=/etc/nginx/sites-available src=/repo/etc/nginx/sites-available state=link force=yes
        notify: restart nginx

But always removing and re-adding the symlink is not so nice, so adding a task to check the link target might be a nice addition:

    - name: check the current symlink
      stat: path=/etc/nginx/sites-available
      register: sites_available

And a 'when' condition on the delete task:

    - name: remove the folder (only if it is a folder)
      file: path=/etc/nginx/sites-available state=absent
      when: sites_available.stat.isdir is defined and sites_available.stat.isdir
Ansible
27,006,925
26
I'm writing my k8s upgrade Ansible playbook, and within it I need to do apt-mark unhold kubeadm. Now, I am trying to avoid using the Ansible command or shell module to call apt if possible, but the apt hold/unhold command does not seem to be supported by either the package or apt modules. Is it possible to do apt-mark hold in Ansible without command or shell?
You can use the ansible.builtin.dpkg_selections module for this. Note that unhold translates to install in this context.

    - name: Hold kubeadm
      ansible.builtin.dpkg_selections:
        name: kubeadm
        selection: hold

    - name: Unhold kubeadm
      ansible.builtin.dpkg_selections:
        name: kubeadm
        selection: install
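If you want to double-check the result on the target host, the selection state can be inspected with dpkg itself (standard dpkg usage, independent of Ansible):

    dpkg --get-selections kubeadm
    # kubeadm        hold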
Ansible
63,982,903
25
I have one playbook, and in this playbook there are many tasks. I need to know how much time each task has taken. Is there any solution?
Add callbacks_enabled = profile_tasks in the [defaults] section of your ansible.cfg. (Or callback_whitelist for Ansible < 2.11.) Here is my ansible.cfg:

    [defaults]
    inventory = hosts
    callbacks_enabled = profile_tasks
    deprecation_warnings = False

Here is my playbook:

    - hosts: localhost
      gather_facts: true
      tasks:
        - name: Sleep for 10 Sec
          command: sleep 10

        - name: Sleep for 5 Sec
          command: sleep 5

        - name: Sleep for 2 Sec
          command: sleep 2

Here is my play output:

    PLAY [localhost] ***************************************

    TASK [Gathering Facts] *********************************
    Thursday 28 May 2020  09:36:04 +0000 (0:00:00.038)  0:00:00.038 **********
    ok: [localhost]

    TASK [Sleep for 10 Sec] ********************************
    Thursday 28 May 2020  09:36:07 +0000 (0:00:03.695)  0:00:03.733 **********
    changed: [localhost]

    TASK [Sleep for 5 Sec] *********************************
    Thursday 28 May 2020  09:36:18 +0000 (0:00:11.166)  0:00:14.899 **********
    changed: [localhost]

    TASK [Sleep for 2 Sec] *********************************
    Thursday 28 May 2020  09:36:24 +0000 (0:00:05.965)  0:00:20.865 **********
    changed: [localhost]

    PLAY RECAP *********************************************
    localhost : ok=4  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

    Thursday 28 May 2020  09:36:27 +0000 (0:00:02.878)  0:00:23.744 **********
    ===============================================================================
    Sleep for 10 Sec ------------------------------------------------ 11.17s
    Sleep for 5 Sec ------------------------------------------------- 5.97s
    Gathering Facts ------------------------------------------------- 3.70s
    Sleep for 2 Sec ------------------------------------------------- 2.88s

At the end it shows how much time each task took to complete the play. The explanation of the callbacks_enabled = profile_tasks parameter is found in the official Ansible docs:

https://docs.ansible.com/ansible/latest/plugins/callback.html#enabling-callback-plugins
https://docs.ansible.com/ansible/latest/plugins/callback.html#plugin-list
https://docs.ansible.com/ansible/latest/plugins/callback/profile_tasks.html
Ansible
61,948,417
25
I'm running a playbook which defines several packages to install via apt:

    - name: Install utility packages common to all hosts
      apt:
        name: "{{ item }}"
        state: present
        autoclean: yes
      with_items:
        - aptitude
        - jq
        - curl
        - git-core
        - at
        ...

A recent Ansible update on my system now renders this message concerning the playbook above:

    [DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name: {{ item }}`, please use `name: [u'aptitude', u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp', u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']` and remove the loop.

If I'm understanding this correctly, Ansible now wants this list of packages as an array, which leaves this:

    name: [u'aptitude', u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp', u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']

Is there a better way? It just seems like I'll be scrolling right forever in Vim trying to maintain this. Either that, or word-wrap it and deal with a word cloud of packages.
You can code the array in YAML style to make it more readable:

    - name: Install utility packages common to all hosts
      apt:
        name:
          - aptitude
          - jq
          - curl
          - git-core
          - at
        state: present
        autoclean: yes
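If the list grows long, a common pattern is to move it into a variable so the task itself stays short (a sketch; the variable name is arbitrary):

    vars:
      utility_packages:
        - aptitude
        - jq
        - curl
        - git-core
        - at

    tasks:
      - name: Install utility packages common to all hosts
        apt:
          name: "{{ utility_packages }}"
          state: present
          autoclean: yes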
Ansible
52,743,147
25
That is to say: how to evaluate the password lookup only once?

    - name: Demo
      hosts: localhost
      gather_facts: False
      vars:
        my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
      tasks:
        - debug:
            msg: "{{ my_pass }}"
        - debug:
            msg: "{{ my_pass }}"
        - debug:
            msg: "{{ my_pass }}"

Each debug statement will print out a different value, e.g.:

    PLAY [Demo] *************

    TASK [debug] ************
    ok: [localhost] => {
        "msg": "ZfyzacMsqZaYqwW"
    }

    TASK [debug] ************
    ok: [localhost] => {
        "msg": "mKcfRedImqxgXnE"
    }

    TASK [debug] ************
    ok: [localhost] => {
        "msg": "POpqMQoJWTiDpEW"
    }

Using Ansible version 2.3.2.0.
Use set_fact to assign a permanent fact:

    - name: Demo
      hosts: localhost
      gather_facts: False
      vars:
        pwd_alias: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
      tasks:
        - set_fact:
            my_pass: "{{ pwd_alias }}"
        - debug:
            msg: "{{ my_pass }}"
        - debug:
            msg: "{{ my_pass }}"
        - debug:
            msg: "{{ my_pass }}"
Ansible
46,732,703
25
I have an Ansible playbook to kill running processes, and it works great most of the time. However, from time to time we find processes that just can't be killed, so "wait_for" reaches the timeout, throws an error and stops the play. The current workaround is to manually go into the box, use "kill -9" and run the Ansible playbook again, so I was wondering if there is any way to handle this scenario from Ansible itself. I mean, I don't want to use kill -9 from the beginning, but is there a way to handle the timeout, perhaps to use kill -9 only if the process hasn't been killed in 300 seconds? What would be the best way to do it? These are the tasks I currently have:

    - name: Get running processes
      shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'"
      register: running_processes

    - name: Kill running processes
      shell: "kill {{ item }}"
      with_items: "{{ running_processes.stdout_lines }}"

    - name: Waiting until all running processes are killed
      wait_for:
        path: "/proc/{{ item }}/status"
        state: absent
      with_items: "{{ running_processes.stdout_lines }}"

Thanks!
You could ignore errors on wait_for and register the result to force kill the failed items:

    - name: Get running processes
      shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'"
      register: running_processes

    - name: Kill running processes
      shell: "kill {{ item }}"
      with_items: "{{ running_processes.stdout_lines }}"

    - wait_for:
        path: "/proc/{{ item }}/status"
        state: absent
      with_items: "{{ running_processes.stdout_lines }}"
      ignore_errors: yes
      register: killed_processes

    - name: Force kill stuck processes
      shell: "kill -9 {{ item }}"
      with_items: "{{ killed_processes.results | select('failed') | map(attribute='item') | list }}"
Ansible
46,515,704
25
I find it hard to believe there isn't anything that covers this use case, but my search has proved fruitless. I have a line in /etc/fstab to mount a drive that's no longer available:

    //archive/Pipeline /pipeline/Archives cifs ro,credentials=/home/username/.config/cifs 0 0

What I want is to change it to:

    #//archive/Pipeline /pipeline/Archives cifs ro,credentials=/home/username/.config/cifs 0 0

I was using this:

    ---
    - hosts: slurm
      remote_user: root
      tasks:
        - name: Comment out pipeline archive in fstab
          lineinfile:
            dest: /etc/fstab
            regexp: '^//archive/pipeline'
            line: '#//archive/pipeline'
            state: present
          tags: update-fstab

expecting it to just insert the comment symbol (#), but instead it replaced the whole line and I ended up with:

    #//archive/Pipeline

Is there a way to glob-capture the rest of the line or just insert the single comment char?

    regexp: '^//archive/pipeline *'
    line: '#//archive/pipeline *'

or

    regexp: '^//archive/pipeline *'
    line: '#//archive/pipeline $1'

I am trying to wrap my head around lineinfile, and from what I've read it looks like insertafter is what I'm looking for, but "insert after" isn't what I want?
You can use the replace module for your case:

    ---
    - hosts: slurm
      remote_user: root
      tasks:
        - name: Comment out pipeline archive in fstab
          replace:
            dest: /etc/fstab
            regexp: '^//archive/pipeline'
            replace: '#//archive/pipeline'
          tags: update-fstab

It will replace all occurrences of the string that matches regexp. lineinfile, on the other hand, works only on one line (even if multiple matches are found in a file). It ensures a particular line is absent or present with a defined content.
Ansible
39,239,602
25
I am using Ansible to deploy a Django website onto my servers (production, staging, etc.), and I would like to get a notification (via Slack in this case) if and only if any task fails. I can only figure out how to do it if a specified task fails (so I guess I could add a handler to all tasks), but intuition tells me there has to be an easier and more elegant option. Basically what I am thinking of is:

    ---
    - hosts: "{{hosts_to_deploy}}"

    - tasks:
        [...]

        - name: notify slack of deploy failure
          local_action:
            module: slack
            token: "{{slack_token}}"
            msg: "Deploy failed on {{inventory_hostname}}"
          when: # any task failed

I have been diving into the Ansible documentation, especially the error handling section, and answers here at SO, but I'm struggling to find an answer to my question. So any help will be much appreciated.
I don't think a handler is a solution, because a handler will only be notified if the task reports a changed state. On a failed state the handler will not be notified. Also, handlers by default will not be fired if the playbook failed. That can be changed, though; for that you will need to set this in your ansible.cfg:

    force_handlers = True

But yes, there are better options available. If you use Ansible 2 you can use the new blocks feature. Blocks group tasks together and have a rescue section which will be triggered only if any of the tasks have failed:

    tasks:
      - block:
          - here
          - go
          - all
          - your
          - tasks
        rescue:
          - name: notify slack of deploy failure
            local_action:
              module: slack
              token: "{{slack_token}}"
              msg: "Deploy failed on {{inventory_hostname}}"

Another option, especially interesting if you're using Ansible 1.x, might be callback plugins. As the name suggests, with these kinds of plugins you can write callbacks which can be fired on various events. Again, if you're using Ansible 2 you're lucky, because there already is a Slack callback plugin available: https://github.com/ansible-collections/community.general/blob/main/plugins/callback/slack.py To use this plugin you need to enable it in your ansible.cfg:

    callback_whitelist = slack

And define some environment variables on your system for configuration:

    This plugin makes use of the following environment variables:
        SLACK_WEBHOOK_URL (required): Slack Webhook URL
        SLACK_CHANNEL     (optional): Slack room to post in. Default: #ansible
        SLACK_USERNAME    (optional): Username to post as. Default: ansible
        SLACK_INVOCATION  (optional): Show command line invocation details. Default: False

That plugin might need some modifications to fit your needs. If that's the case, copy the source and store it relative to your playbook as callback_plugins/custom_slack.py and then enable it in your ansible.cfg:

    callback_whitelist = custom_slack

If you use Ansible 1.x you'll have to see how you can convert it. The API is different; examples for the old API can be found here: https://github.com/ansible/ansible/tree/v1.9.4-1/plugins/callbacks
Ansible
35,892,455
25
I see that files can supply variables to Ansible through the command line using --extra-vars "@some_file.json", or variables can be set in strings as key=value. Is it possible to do both? And if so, what's the syntax?
Specify both, but separately:

    --extra-vars "@some_file.json" --extra-vars "key=value"
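Put together on a full command line, that looks like this (the playbook name is a placeholder):

    ansible-playbook playbook.yml --extra-vars "@some_file.json" --extra-vars "key=value"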
Ansible
34,959,691
25
What's the sane way to run a task only if the host belongs to one or more groups? Currently, I'm using a boolean within the relevant group, e.g.:

Inventory file:

    [db_servers:vars]
    copy_connection_string=true

Task:

    - name: Copy db connection string file
      synchronize: # ...
      when: copy_connection_string is defined

What's the right condition in the when clause to check whether the current host belongs to the db_servers group?
Run a task when a host is a member of a specific group: Ansible contains special or magic variables; one of the most common is group_names, which is a list (array) of all the groups the current host is in.

    - name: Copy db connection string file
      synchronize: # ...
      when: "'db_servers' in group_names"

The above Ansible task will only run if the host is a member of the db_servers group.
Ansible
32,988,878
25
What is the best practice for running one role with different sets of parameters? I need to run one application (a Docker container) multiple times on one server, with different environment variables for each.
There are limitations in the Ansible docs when it comes to this kind of thing; if there's an official best practice, I haven't come across it. One good way that keeps your playbooks nice and readable is running several different plays against the host and calling the role with different parameters in each. The role: foo, var: blah syntax shown below is a good way to pass parameters in, and keeps it clear at a glance what is going on. For example:

    - name: Run the docker role with docker_container_state=foo
      hosts: docker-host
      roles:
        - { role: docker_container, docker_container_state: foo }

    - name: Run the docker role with docker_container_state=bar
      hosts: docker-host
      roles:
        - { role: docker_container, docker_container_state: bar }
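The same thing can also be written in the expanded YAML form, which tends to read better once a role takes more than one parameter (equivalent to the inline form above):

    - name: Run the docker role with docker_container_state=foo
      hosts: docker-host
      roles:
        - role: docker_container
          docker_container_state: foo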
Ansible
32,802,956
25
What is the best way to get the main network interface name for a Linux server with Ansible? This is often/usually eth0 but we can't always assume this is the case and it would be better to identify this dynamically. We are configuring the firewall with Ansible so we need to be able to issue the interface name as part of the commands that we are using.
That should be {{ ansible_default_ipv4.interface }}. This is a system fact.
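For example, a quick way to inspect it on your hosts (a minimal debug task):

    - debug:
        msg: "Main interface: {{ ansible_default_ipv4.interface }}"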
Ansible
32,189,385
25
I would like to build an output that shows the key and value of a variable. The following works perfectly:

    # Format in Ansible
    msg="{{ php_command_result.results | map(attribute='item') | join(', ') }}"

    # Output
    {'value': {'svn_tag': '20150703r1_6.36_homeland'}, 'key': 'ui'}, {'value': {'svn_tag': '20150702r1_6.36_homeland'}, 'key': 'api'}

What I would like is to show the key and svn_tag together. I'm able to display either the key or svn_tag, but getting them to go together doesn't work:

    msg="{{ php_command_result.results | map(attribute='item.key') | join(', ') }}"

    # Output
    ui, api

However, this is what I want:

    # Desired Output
    api - 20150702r1_6.36_homeland
    ui - 20150703r1_6.36_homeland
Using Jinja statements:

    - set_fact:
        php_command_result:
          results: [{"value":{"svn_tag":"20150703r1_6.36_homeland"},"key":"ui"},{"value":{"svn_tag":"20150702r1_6.36_homeland"},"key":"api"}]

    - debug:
        msg: "{% for result in php_command_result.results %}\
          {{ result.key }} - {{ result.value.svn_tag }} | {% endfor %}"

Outputs:

    ok: [localhost] => {
        "msg": "ui - 20150703r1_6.36_homeland | api - 20150702r1_6.36_homeland | "
    }

If you want the results on separate lines:

    - debug:
        msg: "{% set output = [] %}\
          {% for result in php_command_result.results %}\
          {{ output.append( result.key ~ ' - ' ~ result.value.svn_tag) }}\
          {% endfor %}\
          {{ output }}"

Outputs:

    ok: [localhost] => {
        "msg": [
            "ui - 20150703r1_6.36_homeland",
            "api - 20150702r1_6.36_homeland"
        ]
    }

Either of these can be put on one line if desired:

    - debug:
        msg: "{% for result in php_command_result.results %}{{ result.key }} - {{ result.value.svn_tag }} | {% endfor %}"

    - debug:
        msg: "{% set output = [] %}{% for result in php_command_result.results %}{{ output.append( result.key ~ ' - ' ~ result.value.svn_tag) }}{% endfor %}{{ output }}"
Ansible
31,685,125
25
I'm using Ansible with Jinja2 templates, and this is a scenario that I can't find a solution for in Ansible's documentation or by googling around for Jinja2 examples. Here's the logic that I want to achieve in Ansible:

    if {{ existing_ansible_var }} == "string1"
        new_ansible_var = "a"
    else if {{ existing_ansible_var }} == "string2"
        new_ansible_var = "b"
    <...>
    else
        new_ansible_var = ""

I could probably do this by combining several techniques: the variable assignment from here: Set variable in jinja, the conditional comparison here: http://jinja.pocoo.org/docs/dev/templates/#if-expression, and the defaulting filter here: https://docs.ansible.com/playbooks_filters.html#defaulting-undefined-variables, but I feel like that's overkill. Is there a simpler way to do this?
If you just want to output a value in your template depending on the value of existing_ansible_var, you could simply use a dict and feed it with existing_ansible_var:

    {{ {"string1": "a", "string2": "b"}[existing_ansible_var] | default("") }}

You can define a new variable the same way:

    {% set new_ansible_var = {"string1": "a", "string2": "b"}[existing_ansible_var] | default("") -%}

In case existing_ansible_var might not necessarily be defined, you need to catch this with a default() which does not exist in your dict:

    {"string1": "a", "string2": "b"}[existing_ansible_var | default("this key does not exist in the dict")] | default("")

You can as well define it in the playbook and later use new_ansible_var in the template:

    vars:
      myDict:
        string1: a
        string2: b
      new_ansible_var: '{{myDict[existing_ansible_var | default("this key does not exist in the dict")] | default("") }}'
Ansible
30,637,054
25
I have a C++ program hosted in a Bitbucket git repository that I'm compiling with CMake. The current play can be seen below. It works fine, except that the build task is run every time the play is run. Instead, I'd like the build task to run only when a new software version is pulled by the git module. How can I tell in the build task whether the clone task found a new version?

    ---
    # tasks of role: foo

    - name: clone repository
      git: repo=git@bitbucket.org:foo/foo.git dest={{ foo.dir }} accept_hostkey=yes

    - name: create build dir
      file: state=directory path={{ foo.build_dir }}

    - name: build
      command: "{{ item }} chdir={{ foo.build_dir }}"
      with_items:
        - cmake ..
        - make
You can register a variable with the output of the clone task and invoke the build task only when the state of the clone task has changed. For example:

    ---
    # tasks of role: foo

    - name: clone repository
      git: repo=git@bitbucket.org:foo/foo.git dest={{ foo.dir }} accept_hostkey=yes
      register: gitclone

    - name: create build dir
      file: state=directory path={{ foo.build_dir }}

    - name: build
      command: "{{ item }} chdir={{ foo.build_dir }}"
      with_items:
        - cmake ..
        - make
      when: gitclone.changed
Ansible
23,014,713
25
When I try to install an Ansible role, I see this exception:

    $ ansible-galaxy install zzet.postgresql
    Traceback (most recent call last):
      File "/Users/myHomeDir/.homebrew/Cellar/ansible/1.4.3/libexec/bin/ansible-galaxy", line 34, in <module>
        import yaml
    ImportError: No module named yaml

OS: Mac OS Maverick
Ansible: 1.4.3

Does anyone know how to fix it?
Based on the error message, it tries to import the Python module yaml but cannot find it. The yaml module is called pyyaml when you install it with pip:

    pip install pyyaml

If pip is not installed on your Mac, you can install it with:

    easy_install pip
Ansible
20,966,921
25
OK, strange question. I have SSH forwarding working with Vagrant. But I'm trying to get it working when using Ansible as a Vagrant provisioner. I found out exactly what Ansible is executing, and tried it myself from the command line. Sure enough, it fails there too.

    [/common/picsolve-ansible/u12.04%]ssh -o HostName=127.0.0.1 \
        -o User=vagrant -o Port=2222 -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -o PasswordAuthentication=no \
        -o IdentityFile=/Users/bryanhunt/.vagrant.d/insecure_private_key \
        -o IdentitiesOnly=yes -o LogLevel=FATAL \
        -o ForwardAgent=yes "/bin/sh \
        -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
    Permission denied (publickey,password).

But when I just run vagrant ssh, the agent forwarding works correctly, and I can check out R/W my github project:

    [/common/picsolve-ansible/u12.04%]vagrant ssh
    vagrant@vagrant-ubuntu-precise-64:~$ /bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker'
    Cloning into '/home/vagrant/poc_docker'...
    remote: Counting objects: 18, done.
    remote: Compressing objects: 100% (14/14), done.
    remote: Total 18 (delta 4), reused 0 (delta 0)
    Receiving objects: 100% (18/18), done.
    Resolving deltas: 100% (4/4), done.
    vagrant@vagrant-ubuntu-precise-64:~$

Has anyone got any idea how it is working?

Update: By means of ps awux I determined the exact command being executed by Vagrant. I replicated it and git checkout worked:

    ssh vagrant@127.0.0.1 -p 2222 \
        -o Compression=yes \
        -o StrictHostKeyChecking=no \
        -o LogLevel=FATAL \
        -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /Users/bryanhunt/.vagrant.d/insecure_private_key \
        -o ForwardAgent=yes \
        -o LogLevel=DEBUG \
        "/bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
As of ansible 1.5 (devel aa2d6e47f0, last updated 2014/03/24 14:23:18 GMT+100) and Vagrant 1.5.1, this now works. My Vagrant configuration contains the following:

    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "../playbooks/basho_bench.yml"
      ansible.sudo = true
      ansible.host_key_checking = false
      ansible.verbose = 'vvvv'
      ansible.extra_vars = {
        ansible_ssh_user: 'vagrant',
        ansible_connection: 'ssh',
        ansible_ssh_args: '-o ForwardAgent=yes'
      }
    end

It is also a good idea to explicitly disable sudo use. For example, when using the Ansible git module, I do this:

    - name: checkout basho_bench repository
      sudo: no
      action: git repo=git@github.com:basho/basho_bench.git dest=basho_bench
Ansible
20,952,689
25
I want to use a public AWS keypair .pem file for running Ansible playbooks. I want to do this without changing my ~/.ssh/id_rsa.pub, and I can't create a new keypair from my current ~/.ssh/id_rsa.pub and apply it to the EC2 instances I am trying to change.

    $ ansible --version
    ansible 1.9.6
      configured module search path = None

Here is my hosts file (note that my actual IP is replaced with 1.2.3.4). This is probably the issue, since I need a way to set a public key variable and use that:

    [all_servers:vars]
    ansible_ssh_private_key_file = ./mykeypair.pem

    [dashboard]
    1.2.3.4 dashboard_domain=my.domain.info

Here is my playbook:

    ---
    - hosts: dashboard
      gather_facts: False
      remote_user: ubuntu
      tasks:
        - name: ping
          ping:

This is the command I am using to run it:

    ansible-playbook -i ./hosts test.yml

It results in the following error:

    fatal: [1.2.3.4] => SSH Error: Permission denied (publickey). while connecting to 1.2.3.4:22

There is no problem with my keypair:

    $ ssh -i mykeypair.pem ubuntu@1.2.3.4 'whoami'
    ubuntu

What am I doing wrong?
Ok, little mistakes, I guess: you can't have spaces in host file variables, and you need to define the group you are applying the vars to. This hosts file works:

    [dashboard:vars]
    ansible_ssh_private_key_file=./mykeypair.pem

    [dashboard]
    1.2.3.4 dashboard_domain=my.domain.info
Ansible
42,123,317
24
I have to parse some files and convert them to some predefined datatypes. Haskell seems to provide two packages for that: attoparsec and parsec. What is the difference between the two, and which one is better suited for parsing a text file according to some rules?
Parsec

Parsec is good for "user-facing" parsers: things where you have a bounded amount of input but error messages matter. It's not terribly fast, but if you have small inputs this shouldn't matter. For example, I would choose Parsec for virtually any programming language tools since, in absolute terms, even the largest source files are not that big but error messages really matter.

Parsec can work on different input types, which means you can use it with a standard String or with a stream of tokens from an external lexer of some sort. Since it can use String, it handles Unicode perfectly well for you; the built-in basic parsers like digit and letter are Unicode-aware.

Parsec also comes with a monad transformer, which means you can layer it in a monad stack. This could be useful if you want to keep track of additional state during your parse, for example. You could also go for more trippy effects like non-deterministic parsing, or something; the usual magic of monad transformers.

Attoparsec

Attoparsec is much faster than Parsec. You should use it when you expect to get large amounts of input or performance really matters. It's great for things like networking code (parsing packet structure), parsing large amounts of raw data or working with binary file formats.

Attoparsec can work with ByteStrings, which are binary data. This makes it a good choice for implementing things like binary file formats. However, since this is for binary data, it does not handle things like text encoding; for that, you should use the attoparsec module for Text.

Attoparsec supports incremental parsing, which Parsec does not. This is very important for certain applications like networking code, but doesn't matter for others.

Attoparsec has worse error messages than Parsec and sacrifices some high-level features for performance. It's specialized to Text or ByteString, so you can't use it with tokens from a custom lexer. It also isn't a monad transformer.

Which One?

Ultimately, Parsec and Attoparsec cater to very different niches. The high-level difference is performance: if you need it, choose Attoparsec; if you don't, just go with Parsec.

My usual heuristic is choosing Parsec for programming languages, configuration file formats and user input, as well as almost anything I would otherwise do with a regex. These are things usually produced by hand, so the parsers do not need to scale but they do need to report errors well.

On the other hand, I would choose Attoparsec for things like implementing network protocols, dealing with binary data and file formats, or reading in large amounts of automatically generated data. Things where you're dealing with time constraints or large amounts of data, that are usually not directly written by a human.

As you see, the choice is actually often pretty simple: the use cases don't overlap very much. Chances are, it'll be pretty clear which one to use for any given application.
Parsec
19,208,231
82
There seems to be a consensus that you should use Parsec as an applicative rather than a monad. What are the benefits of applicative parsing over monadic parsing?

- style
- performance
- abstraction

Is monadic parsing out?
The main difference between monadic and applicative parsing is in how sequential composition is handled. In the case of an applicative parser, we use (<*>), whereas with a monad we use (>>=).

    (<*>) :: Parser (a -> b) -> Parser a -> Parser b
    (>>=) :: Parser a -> (a -> Parser b) -> Parser b

The monadic approach is more flexible, because it allows the grammar of the second part to depend on the result from the first one, but we rarely need this extra flexibility in practice.

You might think that having some extra flexibility can't hurt, but in reality it can. It prevents us from doing useful static analysis on a parser without running it. For example, let's say we want to know whether a parser can match the empty string or not, and what the possible first characters can be in a match. We want functions:

    empty :: Parser a -> Bool
    first :: Parser a -> Set Char

With an applicative parser, we can easily answer this question. (I'm cheating a little here. Imagine we have data constructors corresponding to (<*>) and (>>=) in our candidate parser "languages".)

    empty (f <*> x) = empty f && empty x

    first (f <*> x) | empty f   = first f `union` first x
                    | otherwise = first f

However, with a monadic parser we don't know what the grammar of the second part is without knowing the input.

    empty (x >>= f) = empty x && empty (f ???)

    first (x >>= f) | empty x   = first x `union` first (f ???)
                    | otherwise = first x

By allowing more, we're able to reason less. This is similar to the choice between dynamic and static type systems.

But what is the point of this? What might we use this extra static knowledge for? Well, we can for example use it to avoid backtracking in LL(1) parsing by comparing the next character to the first set of each alternative. We can also determine statically whether this would be ambiguous by checking if the first sets of two alternatives overlap.

Another example is that it can be used for error recovery, as shown in the paper Deterministic, Error-Correcting Combinator Parsers by S. Doaitse Swierstra and Luc Duponcheel.

Usually, however, the choice between applicative and monadic parsing has already been made by the authors of the parsing library you're using. When a library such as Parsec exposes both interfaces, the choice of which one to use is purely a stylistic one. In some cases applicative code is easier to read than monadic code and sometimes it's the other way round.
Parsec
7,861,903
76
When writing a parser in a parser combinator library like Haskell's Parsec, you usually have two choices:

- Write a lexer to split your String input into tokens, then perform parsing on [Token]
- Directly write parser combinators on String

The first method often seems to make sense given that many parsing inputs can be understood as tokens separated by whitespace. In other places, I have seen people recommend against tokenizing (or scanning or lexing, as some call it), with simplicity being quoted as the main reason. What are the general trade-offs between lexing and not doing it?
The most important difference is that lexing will translate your input domain. A nice result of this is that:

- You do not have to think about whitespace anymore. In a direct (non-lexing) parser, you have to sprinkle space parsers in all places where whitespace is allowed to be, which is easy to forget and it clutters your code if whitespace must separate all your tokens anyway.
- You can think about your input in a piece-by-piece manner, which is easy for humans.

However, if you do perform lexing, you get the problems that:

- You cannot use common parsers on String anymore - e.g. for parsing a number with a library function parseFloat :: Parsec String s Float (that operates on a String input stream), you have to do something like takeNextToken :: TokenParser String and execute the parseFloat parser on it, inspecting the parse result (usually Either ErrorMessage a). This is messy to write and limits composability.
- You have to adjust all error messages. If your parser on tokens fails at the 20th token, where in the input string is that? You'll have to manually map error locations back to the input string, which is tedious (in Parsec this means to adjust all SourcePos values).
- Error reporting is generally worse. Running string "hello" *> space *> float on wrong input like "hello4" will tell you precisely that there is expected whitespace missing after the hello, while a lexer will just claim to have found an "invalid token".
- Many things that one would expect to be atomic units and to be separated by a lexer are actually pretty "too hard" for a lexer to identify. Take for example String literals - suddenly "hello world" are not two tokens "hello and world" anymore (but only, of course, if quotes are not escaped, like \") - while this is very natural for a parser, it means complicated rules and special cases for a lexer.
- You cannot re-use parsers on tokens as nicely. If you define how to parse a double out of a String, export it and the rest of the world can use it; they cannot run your (specialized) tokenizer first.
- You are stuck with it. When you are developing the language to parse, using a lexer might lead you into making early decisions, fixing things that you might want to change afterwards. For example, imagine you defined a language that contains some Float token. At some point, you want to introduce negative literals (-3.4 and - 3.4) - this might not be possible due to the lexer interpreting whitespace as token separator. Using a parser-only approach, you can stay more flexible, making changes to your language easier. This is not really surprising since a parser is a more complex tool that inherently encodes rules.

To summarize, I would recommend writing lexer-free parsers for most cases. In the end, a lexer is just a "dumbed-down"* parser - if you need a parser anyway, combine them into one.

* From computing theory, we know that all regular languages are also context-free languages; lexers are usually regular, parsers context-free or even context-sensitive (monadic parsers like Parsec can express context-sensitivity).
Parsec
15,216,202
49
Text:

    Text.Parsec
    Text.Parsec.ByteString
    Text.Parsec.ByteString.Lazy
    Text.Parsec.Char
    Text.Parsec.Combinator
    Text.Parsec.Error
    Text.Parsec.Expr
    Text.Parsec.Language
    Text.Parsec.Perm
    Text.Parsec.Pos
    Text.Parsec.Prim
    Text.Parsec.String
    Text.Parsec.Token

ParserCombinators:

    Text.ParserCombinators.Parsec
    Text.ParserCombinators.Parsec.Char
    Text.ParserCombinators.Parsec.Combinator
    Text.ParserCombinators.Parsec.Error
    Text.ParserCombinators.Parsec.Expr
    Text.ParserCombinators.Parsec.Language
    Text.ParserCombinators.Parsec.Perm
    Text.ParserCombinators.Parsec.Pos
    Text.ParserCombinators.Parsec.Prim
    Text.ParserCombinators.Parsec.Token

Are they the same?
At the moment there are two widely used major versions of Parsec: Parsec 2 and Parsec 3. My advice is simply to use the latest release of Parsec 3. But if you want to make a conscientious choice, read on.

New in Parsec 3

Monad Transformer: Parsec 3 introduces a monad transformer, ParsecT, which can be used to combine parsing with other monadic effects.

Streams: Although Parsec 2 lets you choose the token type (which is useful when you want to separate lexical analysis from the parsing), the tokens are always arranged into lists. A list may not be the most efficient data structure in which to store large texts. Parsec 3 can work with arbitrary streams: data structures with a list-like interface. You can define your own streams, but Parsec 3 also includes a popular and efficient Stream implementation based on ByteString (for Char-based parsing), exposed through the modules Text.Parsec.ByteString and Text.Parsec.ByteString.Lazy.

Reasons to prefer Parsec 2

Fewer extensions required: The advanced features provided by Parsec 3 do not come for free; several language extensions are required to implement them. Neither of the two versions is Haskell-2010 (i.e. both use extensions), but Parsec 2 uses fewer extensions than Parsec 3, so the chances that any given compiler can compile Parsec 2 are higher than those for Parsec 3. By this time both versions work with GHC, while Parsec 2 is also reported to build with JHC and is included as one of JHC's standard libraries.

Performance: Originally (i.e. as of the 3.0 version) Parsec 3 was considerably slower than Parsec 2. However, work on improving Parsec 3 performance has been done, and as of version 3.1 Parsec 3 is only slightly slower than Parsec 2 (benchmarks: 1, 2).

Compatibility layer

It has been possible to "reimplement" all of the Parsec 2 API in Parsec 3. This compatibility layer is provided by the Parsec 3 package under the module hierarchy Text.ParserCombinators.Parsec (the same hierarchy which is used by Parsec 2), while the new Parsec 3 API is available under the Text.Parsec hierarchy. This means that you can use Parsec 3 as a drop-in replacement for Parsec 2.
Parsec
6,029,371
44
Using Parsec 3.1, it is possible to parse several types of inputs:

- [Char] with Text.Parsec.String
- Data.ByteString with Text.Parsec.ByteString
- Data.ByteString.Lazy with Text.Parsec.ByteString.Lazy

I don't see anything for the Data.Text module. I want to parse Unicode content without suffering from the String inefficiencies. So I've created the following module based on the Text.Parsec.ByteString module:

    {-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
    {-# OPTIONS_GHC -fno-warn-orphans #-}

    module Text.Parsec.Text
        ( Parser, GenParser
        ) where

    import Text.Parsec.Prim
    import qualified Data.Text as T

    instance (Monad m) => Stream T.Text m Char where
        uncons = return . T.uncons

    type Parser = Parsec T.Text ()
    type GenParser t st = Parsec T.Text st

Does it make sense to do so? Is this compatible with the rest of the Parsec API?

Additional comments: I had to add the {-# LANGUAGE NoMonomorphismRestriction #-} pragma in my parse modules to make it work. Parsing Text is one thing, building an AST with Text is another thing. I will also need to pack my String before returning:

    module TestText where

    import Data.Text as T
    import Text.Parsec
    import Text.Parsec.Prim
    import Text.Parsec.Text

    input = T.pack "xxxxxxxxxxxxxxyyyyxxxxxxxxxp"

    parser = do
        x1 <- many1 (char 'x')
        y  <- many1 (char 'y')
        x2 <- many1 (char 'x')
        return (T.pack x1, T.pack y, T.pack x2)

    test = runParser parser () "test" input
Since Parsec 3.1.2, support for Data.Text is built in! See http://hackage.haskell.org/package/parsec-3.1.2 If you are stuck with an older version, the code snippets in other answers are helpful, too.
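With the built-in module, the test from the question reduces to swapping the custom module for the one shipped with parsec (a sketch against parsec >= 3.1.2, where Text.Parsec.Text exports a Parser type specialized to Text):

    module TestText where

    import qualified Data.Text as T
    import Text.Parsec
    import Text.Parsec.Text (Parser)

    input :: T.Text
    input = T.pack "xxxxxxxxxxxxxxyyyyxxxxxxxxxp"

    -- same parser as in the question, with an explicit type signature
    parser :: Parser (T.Text, T.Text, T.Text)
    parser = do
        x1 <- many1 (char 'x')
        y  <- many1 (char 'y')
        x2 <- many1 (char 'x')
        return (T.pack x1, T.pack y, T.pack x2)

    test :: Either ParseError (T.Text, T.Text, T.Text)
    test = runParser parser () "test" input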
Parsec
4,064,532
37
Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

    import Control.Applicative
    import Text.Parsec hiding ((<|>))

    expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

    term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

    fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

    eval :: String -> Int
    eval = either (error . show) id . parse expr "" . filter (/= ' ')

    main :: IO ()
    main = do
      file <- readFile "expr"
      putStr $ show $ eval file
      putStr "\n"

and here's my self-contained precedence climbing parser in F#:

    let rec (|Expr|) = function
      | P(f, xs) -> Expr(loop (' ', f, xs))
      | xs -> invalidArg "Expr" (sprintf "%A" xs)
    and loop = function
      | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
      | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
          let h, xs = loop (op, g, xs)
          match op with
          | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' | _ -> (/)
          |> fun op -> loop (oop, op f h, xs)
      | _, f, xs -> f, xs
    and (|P|_|) = function
      | '('::Expr(f, ')'::xs) -> Some(P(f, xs))
      | c::_ as xs when '0' <= c && c <= '9' ->
          let rec loop n = function
            | c2::xs when '0' <= c2 && c2 <= '9' -> loop (10*n + int(string c2)) xs
            | xs -> Some(P(n, xs))
          loop 0 xs
      | _ -> None

My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

EDIT: Here's the OCaml script I used to generate a ~2Mb expression for benchmarking:

    open Printf

    let rec f ff n =
      if n=0 then fprintf ff "1" else
        fprintf ff "%a+%a*(%a-%a)" f (n-1) f (n-1) f (n-1) f (n-1)

    let () =
      let n = try int_of_string Sys.argv.(1) with _ -> 3 in
      fprintf stdout "%a\n" f n
I've come up with a Haskell solution that is 30× faster than the Haskell solution you posted (with my concocted test expression).

Major changes:

Change Parsec/String to Attoparsec/ByteString
In the fact function, change read & many1 digit to decimal
Made the chainl1 recursion strict (remove $! for the lazier version).

I tried to keep everything else you had as similar as possible.

import Control.Applicative
import Data.Attoparsec
import Data.Attoparsec.Char8
import qualified Data.ByteString.Char8 as B

expr :: Parser Int
expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

term :: Parser Int
term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

fact :: Parser Int
fact = decimal <|> char '(' *> expr <* char ')'

eval :: B.ByteString -> Int
eval = either (error . show) id . eitherResult . parse expr . B.filter (/= ' ')

chainl1 :: (Monad f, Alternative f) => f a -> f (a -> a -> a) -> f a
chainl1 p op = p >>= rest
  where rest x = do f <- op
                    y <- p
                    rest $! (f x y)
                 <|> pure x

main :: IO ()
main = B.readFile "expr" >>= (print . eval)

I guess what I concluded from this is that the majority of the slowdown for the parser combinator was that it was sitting on an inefficient base, not that it was a parser combinator, per se. I imagine with more time and profiling this could go faster, as I stopped when I went past the 25× mark. I don't know if this would be faster than the precedence climbing parser ported to Haskell. Maybe that would be an interesting test?
Parsec
4,559,399
37
To learn how to write and parse a context-free grammar I want to choose a tool. For Haskell, there are two big options: Happy, which generates a parser from a grammar description, and *Parsec, which allows you to directly code a parser in Haskell. What are the (dis)advantages of either approach?
External vs internal DSL

The parser specification format for Happy is an external DSL, whereas with Parsec you have the full power of Haskell available when defining your parsers. This means that you can for example write functions to generate parsers, use Template Haskell and so on.

Precedence rules

With Happy, you can use precedences to simplify your grammar, whereas with Parsec you have to nest the grammar rules correctly yourself. Changing the precedence of an operator is therefore much more tedious in Parsec.

Static checking

Happy will warn you about ambiguities in your grammar at compile time. (Though it's not great at telling you where they are.) With Parsec, you get no warning until your parser fails at run time.
Parsec
7,269,025
28
I'm trying to make a parser for a simple functional language, a bit like Caml, but I seem to be stuck with the simplest things.

So I'd like to know if there are some more complete examples of Parsec parsers, something that goes beyond "this is how you parse 2 + 3", especially function calls in terms and suchlike. I've read "Write You a Scheme", but the syntax of Scheme is quite simple and not really helping for learning.

The biggest problem I have is how to use try, <|> and choice properly, because I really don't get why Parsec never seems to parse a(6) as a function call using this parser:

expr = choice [number, call, ident]

number = liftM Number float <?> "Number"
ident = liftM Identifier identifier <?> "Identifier"
call = do name <- identifier
          args <- parens $ commaSep expr
          return $ FuncCall name args
       <?> "Function call"

EDIT: Added some code for completion, though this is actually not the thing I asked:

AST.hs

module AST where

data AST = Number Double
         | Identifier String
         | Operation BinOp AST AST
         | FuncCall String [AST]
         deriving (Show, Eq)

data BinOp = Plus | Minus | Mul | Div
           deriving (Show, Eq, Enum)

Lexer.hs

module Lexer
    ( identifier, reserved, operator, reservedOp, charLiteral, stringLiteral,
      natural, integer, float, naturalOrFloat, decimal, hexadecimal, octal,
      symbol, lexeme, whiteSpace, parens, braces, angles, brackets, semi,
      comma, colon, dot, semiSep, semiSep1, commaSep, commaSep1
    ) where

import Text.Parsec
import qualified Text.Parsec.Token as P
import Text.Parsec.Language (haskellStyle)

lexer = P.makeTokenParser haskellStyle

identifier = P.identifier lexer
reserved = P.reserved lexer
operator = P.operator lexer
reservedOp = P.reservedOp lexer
charLiteral = P.charLiteral lexer
stringLiteral = P.stringLiteral lexer
natural = P.natural lexer
integer = P.integer lexer
float = P.float lexer
naturalOrFloat = P.naturalOrFloat lexer
decimal = P.decimal lexer
hexadecimal = P.hexadecimal lexer
octal = P.octal lexer
symbol = P.symbol lexer
lexeme = P.lexeme lexer
whiteSpace = P.whiteSpace lexer
parens = P.parens lexer
braces = P.braces lexer
angles = P.angles lexer
brackets = P.brackets lexer
semi = P.semi lexer
comma = P.comma lexer
colon = P.colon lexer
dot = P.dot lexer
semiSep = P.semiSep lexer
semiSep1 = P.semiSep1 lexer
commaSep = P.commaSep lexer
commaSep1 = P.commaSep1 lexer

Parser.hs

module Parser where

import Control.Monad (liftM)
import Text.Parsec
import Text.Parsec.String (Parser)

import Lexer
import AST

expr = number <|> callOrIdent

number = liftM Number float <?> "Number"

callOrIdent = do
    name <- identifier
    liftM (FuncCall name) (parens $ commaSep expr) <|> return (Identifier name)
Hmm,

*Expr> parse expr "" "a(6)"
Right (FuncCall "a" [Number 6.0])

that part works for me after filling out the missing pieces.

Edit: I filled out the missing pieces by writing my own float parser, which could parse integer literals. The float parser from Text.Parsec.Token, on the other hand, only parses literals with a fraction part or an exponent, so it failed parsing the "6".

However,

*Expr> parse expr "" "variable"
Left (line 1, column 9):
unexpected end of input
expecting "("

when call fails after having parsed an identifier, that part of the input is consumed, hence ident isn't tried, and the overall parse fails. You can

a) make it try call in the choice list of expr, so that call fails without consuming input, or
b) write a parser callOrIdent to use in expr, e.g.

callOrIdent = do
    name <- identifier
    liftM (FuncCall name) (parens $ commaSep expr) <|> return (Identifier name)

which avoids try and thus may perform better.
Parsec
8,218,529
25
What is the difference between "try" and "lookAhead" functions in parsec?
The combinators try and lookAhead are similar in that they both let Parsec "rewind", but they apply in different circumstances. In particular, try rewinds failure while lookAhead rewinds success.

By the documentation, try "pretends that it hasn't consumed any input when an error occurs" while lookAhead p "parses p without consuming any input", but "if p fails and consumes some input, so does lookAhead".

So if you think of a parser running as walking along some streaming state and either failing or succeeding, which we might write in Haskell terms as

type Parser a = [Tokens] -> (Either Error a, [Tokens])

then try ensures that if

(try p) input ---> (Left err, output)

then input == output, and lookAhead has it such that if

(lookAhead p) input ---> (Right a, output)

then input == output, but if

(lookAhead p) input ---> (Left err, output)

then they may be allowed to differ.

We can see this in action by looking at the code for Parsec directly, which is somewhat more complex than my notion of Parser above. First we examine ParsecT:

newtype ParsecT s u m a = ParsecT
    { unParser :: forall b . State s u
               -> (a -> State s u -> ParseError -> m b) -- consumed ok
               -> (ParseError -> m b)                   -- consumed err
               -> (a -> State s u -> ParseError -> m b) -- empty ok
               -> (ParseError -> m b)                   -- empty err
               -> m b
    }

ParsecT is a continuation-based datatype. If you look at how one of them is constructed

ParsecT $ \s cok cerr eok eerr -> ...

you'll see how we have access to the State s u, s, and four functions which determine how we move forward. For instance, the fail clause of ParsecT's Monad instance uses the eerr option, constructing a ParseError from the current input position and the passed error message:

parserFail :: String -> ParsecT s u m a
parserFail msg = ParsecT $ \s _ _ _ eerr ->
    eerr $ newErrorMessage (Message msg) (statePos s)

while the most primitive successful token parse (tokenPrim) uses a complex sequence of events eventually culminating in calling cok with an updated State s u.

With this intuition, the source for try is particularly simple:

try :: ParsecT s u m a -> ParsecT s u m a
try p = ParsecT $ \s cok _ eok eerr ->
    unParser p s cok eerr eok eerr

It simply builds a new ParsecT based on the one passed to try, but with the "empty err" continuation in place of the consumed error. Whatever parsing combinator next sees try p will be unable to access its actual "consumed err" continuation, and thus try is protected from changing its state on errors.

But lookAhead is more sophisticated:

lookAhead :: (Stream s m t) => ParsecT s u m a -> ParsecT s u m a
lookAhead p = do { state <- getParserState
                 ; x <- p'
                 ; setParserState state
                 ; return x
                 }
  where p' = ParsecT $ \s cok cerr eok eerr ->
                 unParser p s eok cerr eok eerr

Examining just the where-clause, we see it depends on modifying the passed parser p to use the "empty ok" continuation in place of the "consumed ok" continuation. This is symmetric to what try did. Further, it ensures that the parser state is unaffected by whatever happens when this modified p' is run via its do-block.
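To make the difference concrete, here is a small self-contained sketch (the example inputs are arbitrary):

import Text.Parsec
import Text.Parsec.String (Parser)

-- try rewinds failure: without it, the first branch would consume "foo"
-- from "food" before failing, and the second branch would never be tried
p1 :: Parser String
p1 = try (string "fool") <|> string "food"

-- lookAhead rewinds success: it peeks at the keyword without consuming it,
-- so the parser that follows still sees the entire input
p2 :: Parser (String, String)
p2 = do
    kw   <- lookAhead (string "let")
    rest <- many1 letter
    return (kw, rest)

main :: IO ()
main = do
    print (parse p1 "" "food") -- Right "food"
    print (parse p2 "" "letx") -- Right ("let","letx")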
Parsec
20,020,350
24
I'm just starting with Parsec (having little experience in Haskell), and I'm a little confused about using monads or applicatives. The overall feel I had after reading "Real World Haskell", "Write You a Haskell" and a question here is that applicatives are preferred, but really I have no idea.

So my questions are:

What approach is preferred?
Can monads and applicatives be mixed (use them when they are more useful than the other)?
If the last answer is yes, should I do it?
It might be worth paying attention to the key semantic difference between Applicative and Monad, in order to determine when each is appropriate. Compare types:

(<*>) :: m (s -> t) -> m s -> m t
(>>=) :: m s -> (s -> m t) -> m t

To deploy <*>, you choose two computations, one of a function, the other of an argument, then their values are combined by application. To deploy >>=, you choose one computation, and you explain how you will make use of its resulting values to choose the next computation. It is the difference between "batch mode" and "interactive" operation.

When it comes to parsing, Applicative (extended with failure and choice to give Alternative) captures the context-free aspects of your grammar. You will need the extra power that Monad gives you only if you need to inspect the parse tree from part of your input in order to decide what grammar you should use for another part of your input. E.g., you might read a format descriptor, then an input in that format. Minimizing your usage of the extra power of monads tells you which value-dependencies are essential.

Shifting from parsing to parallelism, this idea of using >>= only for essential value-dependency buys you clarity about opportunities to spread load. When two computations are combined with <*>, neither need wait for the other. Applicative-when-you-can-but-monadic-when-you-must is the formula for speed. The point of ApplicativeDo is to automate the dependency analysis of code which has been written in monadic style and thus accidentally oversequentialised.

Your question also relates to coding style, about which opinions are free to differ. But let me tell you a story. I came to Haskell from Standard ML, where I was used to writing programs in direct style even if they did naughty things like throw exceptions or mutate references. What was I doing in ML? Working on an implementation of an ultra-pure type theory (which may not be named, for legal reasons). When working in that type theory, I couldn't write direct-style programs which used exceptions, but I cooked up the applicative combinators as a way of getting as close to direct style as possible.

When I moved to Haskell, I was horrified to discover the extent to which people seemed to think that programming in pseudo-imperative do-notation was just punishment for the slightest semantic impurity (apart, of course, from non-termination). I adopted the applicative combinators as a style choice (and went even closer to direct style with "idiom brackets") long before I had a grasp of the semantic distinction, i.e., that they represented a useful weakening of the monad interface. I just didn't (and still don't) like the way do-notation requires fragmentation of expression structure and the gratuitous naming of things. That's to say, the same things that make functional code more compact and readable than imperative code also make applicative style more compact and readable than do-notation.

I appreciate that ApplicativeDo is a great way to make more applicative (and in some cases that means faster) programs that were written in monadic style that you haven't the time to refactor. But otherwise, I'd argue applicative-when-you-can-but-monadic-when-you-must is also the better way to see what's going on.
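To illustrate "applicative when you can, monadic when you must" with Parsec specifically (a sketch; both grammars are invented):

import Text.Parsec
import Text.Parsec.String (Parser)

-- context-free: the shape of the parse never depends on a parsed value,
-- so Applicative is enough
pairA :: Parser (Char, Char)
pairA = (,) <$> letter <* char ',' <*> letter

-- context-sensitive: the number parsed first decides how much input to
-- consume next, which genuinely needs Monad
counted :: Parser String
counted = do
    n <- read <$> many1 digit
    _ <- char ':'
    count n anyChar

For example, parse counted "" "3:abcde" yields Right "abc".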
Parsec
38,707,813
23
I'm trying to get data from a webpage that serves an XML file periodically with stock market quotes (sample data). The structure of the XML is very simple, and is something like this:

<?xml version="1.0"?>
<Contents>
  <StockQuote Symbol="PETR3" Date="21-12-2010" Time="13:20" Price="23.02" />
</Contents>

(it's more than that but this suffices as an example).

I'd like to parse it to a data structure:

data Quote = Quote { symbol :: String,
                     date   :: Data.Time.Calendar.Day,
                     time   :: Data.Time.LocalTime.TimeOfDay,
                     price  :: Float }

I understand more or less how Parsec works (on the level of the Real World Haskell book), and I tried the Text.XML library a bit, but all I could develop was code that worked but is too big for such a simple task and looks like a half-baked hack and not the best one could do. I don't know a lot about parsers and XML (I know basically what I read in the RWH book; I never used parsers before). (I just do statistical and numerical programming; I'm not a computer scientist.)

Is there an XML parsing library where I could just tell what the model is and extract the information right away, without having to parse each element by hand and without having to parse pure string? I'm thinking about something like:

myParser = do
  cont <- openXMLElem "Contents"
  quote <- openXMLElem "StockQuote"
  symb <- getXMLElemField "Symbol"
  date <- getXMLElemField "Date"
  (...)
  closequote <- closeXMLElem "StockQuote"
  closecont <- closeXMLElem "Contents"
  return (symb, date)

results = parse myParser "" myXMLString

where I wouldn't have to deal with the pure string and create the combinators myself (I suck at it).

EDIT: I probably need to read a bit (just enough to get this done the right way) about parsers in general (not only Parsec) and the minimum about XML. Do you guys recommend something?

The real string I have to parse is this:

stringTest = "<?xml version=\"1.0\"?>\r\n<ComportamentoPapeis><Papel Codigo=\"PETR3\" Nome=\"PETROBRAS ON\" Ibovespa=\"#\" Data=\"05/01/201100:00:00\" Abertura=\"29,80\" Minimo=\"30,31\" Maximo=\"30,67\" Medio=\"30,36\" Ultimo=\"30,45\" Oscilacao=\"1,89\" Minino=\"29,71\"/></ComportamentoPapeis>\r\n"

EDIT2: I tried the following (readFloat, readQuoteTime, etc... are just functions to read things from strings).

bvspaParser :: (ArrowXml a) => a XmlTree Quote
bvspaParser = hasName "ComportamentoPapeis" /> hasName "Papel" >>> proc x -> do
    (hour,date) <- readQuoteTime ^<< getAttrValue "Data" -< x
    quoteCode   <- getAttrValue "Codigo" -< x
    openPrice   <- readFloat ^<< getAttrValue "Abertura" -< x
    minim       <- readFloat ^<< getAttrValue "Minimo" -< x
    maxim       <- readFloat ^<< getAttrValue "Maximo" -< x
    ultimo      <- readFloat ^<< getAttrValue "Ultimo" -< x
    returnA -< Quote quoteCode (LocalTime date hour) openPrice minim maxim ultimo

docParser :: String -> IO [Quote]
docParser str = runX $ readString [] str >>> (parseXmlDocument False) >>> bvspaParser

When I call it in ghci:

*Main> docParser stringTest >>= print
[]

Is anything wrong?
There are plenty of XML libraries written for Haskell that can do the parsing for you. I recommend the library called xml (see http://hackage.haskell.org/package/xml). With it, you can simply write e.g.:

let contents = parseXML source
    quotes   = concatMap (findElements $ unqual "StockQuote") (onlyElems contents)
    symbols  = map (findAttr $ unqual "Symbol") quotes
print symbols

This snippet prints [Just "PETR3"] as a result for your example XML, and it's easy to extend for collecting all the data you need. To write the program in the style you describe you should use the Maybe monad, as the xml lookup functions often return a Maybe String, signaling whether the tag, element or attribute could be found.

Also see a related question: Which Haskell XML library to use?
Parsec
4,619,206
18
The indents package for Haskell's Parsec provides a way to parse indentation-style languages (like Haskell and Python). It redefines the Parser type, so how do you use the token parser functions exported by Parsec's Text.Parsec.Token module, which are of the normal Parser type?

Background

Parsec is a parser combinator library, whatever that means.
IndentParser 0.2.1 is an old package providing the two modules Text.ParserCombinators.Parsec.IndentParser and Text.ParserCombinators.Parsec.IndentParser.Token.
indents 0.3.3 is a new package providing the single module Text.Parsec.Indent.

Parsec comes with a load of modules. Most of them export a bunch of useful parsers (e.g. newline from Text.Parsec.Char, which parses a newline) or parser combinators (e.g. count n p from Text.Parsec.Combinator, which runs the parser p, n times).

However, the module Text.Parsec.Token would like to export functions which are parametrized by the user with features of the language being parsed, so that, for example, the braces p function will run the parser p after parsing a '{' and before parsing a '}', ignoring things like comments, the syntax of which depends on your language.

The way that Text.Parsec.Token achieves this is that it exports a single function makeTokenParser, which you call, giving it the parameters of your specific language (like what a comment looks like), and it returns a record containing all of the functions in Text.Parsec.Token, adapted to your language as specified.

Of course, in an indentation-style language, these would need to be adapted further (perhaps? here's where I'm not sure; I'll explain in a moment), so I note that the (presumably obsolete) IndentParser package provides a module Text.ParserCombinators.Parsec.IndentParser.Token which looks to be a drop-in replacement for Text.Parsec.Token.

I should mention at some point that all the Parsec parsers are monadic functions, so they do magic things with state so that error messages can say at what line and column in the source file the error appeared.

My Problem

For a couple of small reasons it appears to me that the indents package is more-or-less the current version of IndentParser; however, it does not provide a module that looks like Text.ParserCombinators.Parsec.IndentParser.Token, it only provides Text.Parsec.Indent, so I am wondering how one goes about getting all the token parsers from Text.Parsec.Token (like reserved "something" which parses the reserved keyword "something", or like braces which I mentioned earlier).

It would appear to me that (the new) Text.Parsec.Indent works by some sort of monadic state magic to work out at what column bits of source code are, so that it doesn't need to modify the token parsers like whiteSpace from Text.Parsec.Token, which is probably why it doesn't provide a replacement module. But I am having a problem with types.

You see, without Text.Parsec.Indent, all my parsers are of type Parser Something where Something is the return type and Parser is a type alias defined in Text.Parsec.String as

type Parser = Parsec String ()

but with Text.Parsec.Indent, instead of importing Text.Parsec.String, I use my own definition

type Parser a = IndentParser String () a

which makes all my parsers of type IndentParser String () Something, where IndentParser is defined in Text.Parsec.Indent. But the token parsers that I'm getting from makeTokenParser in Text.Parsec.Token are of the wrong type.

If this isn't making much sense by now, it's because I'm a bit lost. The type issue is discussed a bit here.
The error I'm getting is that I've tried replacing the one definition of Parser above with the other, but then when I try to use one of the token parsers from Text.Parsec.Token, I get the compile error:

Couldn't match expected type `Control.Monad.Trans.State.Lazy.State
                                Text.Parsec.Pos.SourcePos'
            with actual type `Data.Functor.Identity.Identity'
Expected type: P.GenTokenParser
                 String
                 ()
                 (Control.Monad.Trans.State.Lazy.State Text.Parsec.Pos.SourcePos)
  Actual type: P.TokenParser ()

Links

Parsec
IndentParser (old package)
indents, providing Text.Parsec.Indent (new package)
some discussion of Parser types with example code
another example of using Text.Parsec.Indent

Sadly, neither of the examples above uses token parsers like those in Text.Parsec.Token.
What are you trying to do?

It sounds like you want to have your parsers defined everywhere as being of type Parser Something (where Something is the return type) and to make this work by hiding and redefining the Parser type which is normally imported from Text.Parsec.String or similar. You still need to import some of Text.Parsec.String, to make Stream an instance of a monad; do this with the line:

import Text.Parsec.String ()

Your definition of Parser is correct. Alternatively and equivalently (for those following the chat in the comments) you can use

import Control.Monad.State
import Text.Parsec.Pos (SourcePos)

type Parser = ParsecT String () (State SourcePos)

and possibly do away with the import Text.Parsec.Indent (IndentParser) in the file in which this definition appears.

Error, error on the wall

Your problem is that you're looking at the wrong part of the compiler error message. You're focusing on

Couldn't match expected type `State SourcePos' with actual type `Identity'

when you should be focusing on

Expected type: P.GenTokenParser ...
  Actual type: P.TokenParser ...

It compiles!

Where you "import" parsers from Text.Parsec.Token, what you actually do, of course (as you briefly mentioned), is first to define a record of your language parameters and then to pass this to the function makeTokenParser, which returns a record containing the token parsers. You must therefore have some lines that look something like this:

import qualified Text.Parsec.Token as P

beetleDef :: P.LanguageDef st
beetleDef = haskellStyle { parameters, parameters etc. }

lexer :: P.TokenParser ()
lexer = P.makeTokenParser beetleDef

...

but a P.LanguageDef st is just a GenLanguageDef String st Identity, and a P.TokenParser () is really a GenTokenParser String () Identity. You must change your type declarations to the following:

import Control.Monad.State
import Text.Parsec.Pos (SourcePos)
import qualified Text.Parsec.Token as P

beetleDef :: P.GenLanguageDef String st (State SourcePos)
beetleDef = haskellStyle { parameters, parameters etc. }

lexer :: P.GenTokenParser String () (State SourcePos)
lexer = P.makeTokenParser beetleDef

...

and that's it! This will allow your "imported" token parsers to have type ParsecT String () (State SourcePos) Something, instead of Parsec String () Something (which is an alias for ParsecT String () Identity Something), and your code should now compile.

(For maximum generality, I'm assuming that you might be defining the Parser type in a file separate from, and imported by, the file in which you define your actual parser functions. Hence the two repeated import statements.)

Thanks

Many thanks to Daniel Fischer for helping me with this.
Parsec
15,315,624
18
Using Control.Applicative is very useful with Parsec, but you need to always hide <|> and similar objects as they conflict with Parsec's own:

import Control.Applicative hiding ((<|>), many, optional)
import Text.Parsec.Combinator
import Text.Parsec

Alternatively, as Antal S-Z points out, you can hide the Parsec version. However, as far as I can tell, this seems like an unnecessary restriction. Why did parsec not simply implement these operators from Applicative?
It's for historic reasons. The Parsec library predates the discovery of applicative functors and so it wasn't designed with them in mind. And I guess no one has taken the time to update Parsec to use Control.Applicative. There is no deep fundamental reason for not doing it.
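(For what it's worth: in parsec 3, ParsecT does carry Applicative and Alternative instances, and its Alternative (<|>) is, as far as I can tell, the same mplus that the library's own (<|>) uses, so the two behave identically and the name clash is the only remaining nuisance. A sketch of the import dance, hiding the Parsec versions instead:)

import Control.Applicative
import Text.Parsec hiding ((<|>), many, optional)
import Text.Parsec.String (Parser)

-- a small parser written entirely with the Applicative operators
sum2 :: Parser Int
sum2 = (+) <$> num <* char '+' <*> num
  where num = read <$> many1 digit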
Parsec
15,767,978
18
I'm parsing an expression using Parsec and I want to keep track of variables in these expressions using the user state in Parsec. Unfortunately I don't really get how to do it.

Given the following code:

import Data.Set as Set

inp = "$x = $y + $z"

data Var = V String

var = do
  char '$'
  n <- many1 letter
  let v = V n
  -- I want to modify the set of variables here
  return v

parseAssignment = ... -- parses the above assignment

run = case runIdentity $ runParserT parseAssignment Set.empty "" inp of
        Left err -> ...
        Right _  -> ...

So, the u in ParsecT s u m a would be Set.Set. But how would I integrate the state update into var? I tried something like modify $ Set.insert v, but this doesn't work, since Set.Set is not a state monad.
Unfortunately, Yuras' suggestion of updateParserState is not optimal (you'd use that function if you're looking to modify Parsec's internal state as well); instead you should pass a function that works over your custom user state (i.e. of type u -> u) to modifyState, such as in this example:

expr = do
  x <- identifier
  modifyState (+1)
  -- ^ in this example, our type u is Int
  return (Id x)

or use any combination of the getState and putState functions. For your case, you'd do something like:

modifyState (Set.insert v)

See this link for more info.

For a more tutorial-like introduction to working with user state in Parsec, this document, though old, should be relevant.
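Putting the pieces together, a complete runnable sketch for the variable-collecting case (simplified from the question's code; the VarParser alias is made up):

import Text.Parsec
import qualified Data.Set as Set

data Var = V String deriving Show

-- the user-state slot u is instantiated to Set.Set String
type VarParser a = Parsec String (Set.Set String) a

var :: VarParser Var
var = do
    _ <- char '$'
    n <- many1 letter
    modifyState (Set.insert n) -- record the variable in the user state
    return (V n)

run :: String -> Either ParseError (Var, Set.Set String)
run = runParser ((,) <$> var <*> getState) Set.empty ""

E.g. run "$x" gives Right (V "x",fromList ["x"]).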
Parsec
6,477,541
17
The documentation for Parsec.Expr.buildExpressionParser says:

    Prefix and postfix operators of the same precedence can only occur once (i.e. --2 is not allowed if - is prefix negate).

and indeed, this is biting me, since the language I am trying to parse allows arbitrary repetition of its prefix and postfix operators (think of a C expression like **a[1][2]).

So, why does Parsec make this restriction, and how can I work around it?

I think I can move my prefix/postfix parsers down into the term parser since they have the highest precedence, i.e.

**a + 1

is parsed as

(*(*(a)))+(1)

but what could I have done if I wanted it to parse as *(*((a)+(1)))? If buildExpressionParser did what I want, I could simply have rearranged the order of the operators in the table.

Note

See here for a better solution.
I solved it myself by using chainl1:

prefix  p = Prefix  . chainl1 p $ return       (.)
postfix p = Postfix . chainl1 p $ return (flip (.))

These combinators use chainl1 with an op parser that always succeeds, and simply compose the functions returned by the term parser in left-to-right or right-to-left order. These can be used in the buildExprParser table; where you would have done this:

exprTable = [ [ Postfix subscr
              , Postfix dot ]
            , [ Prefix pos
              , Prefix neg ] ]

you now do this:

exprTable = [ [ postfix $ choice [ subscr
                                 , dot ] ]
            , [ prefix $ choice [ pos
                                 , neg ] ] ]

In this way, buildExprParser can still be used to set operator precedence, but now only sees a single Prefix or Postfix operator at each precedence. However, that operator has the ability to slurp up as many copies of itself as it can, and return a function which makes it look as if there were only a single operator.
Parsec
10,475,337
17
I'm writing my first program with Parsec. I want to parse MySQL schema dumps and would like to come up with a nice way to parse strings representing certain keywords in case-insensitive fashion. Here is some code showing the approach I'm using to parse "CREATE" or "create". Is there a better way to do this? An answer that doesn't resort to buildExpressionParser would be best. I'm taking baby steps here.

p_create_t :: GenParser Char st Statement
p_create_t = do
  x <- (string "CREATE" <|> string "create")
  xs <- manyTill anyChar (char ';')
  return $ CreateTable (x ++ xs) [] -- refine later
You can build the case-insensitive parser out of character parsers:

-- Match the lowercase or uppercase form of 'c'
caseInsensitiveChar c = char (toLower c) <|> char (toUpper c)

-- Match the string 's', accepting either lowercase or uppercase form of each character
caseInsensitiveString s = try (mapM caseInsensitiveChar s) <?> "\"" ++ s ++ "\""
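Made self-contained with the import and signatures it needs, plus a quick usage check (note the parser returns the keyword as it actually appeared in the input):

import Data.Char (toLower, toUpper)
import Text.Parsec
import Text.Parsec.String (Parser)

caseInsensitiveChar :: Char -> Parser Char
caseInsensitiveChar c = char (toLower c) <|> char (toUpper c)

caseInsensitiveString :: String -> Parser String
caseInsensitiveString s = try (mapM caseInsensitiveChar s) <?> "\"" ++ s ++ "\""

main :: IO ()
main = do
    parseTest (caseInsensitiveString "create") "CREATE TABLE t" -- "CREATE"
    parseTest (caseInsensitiveString "create") "Create TABLE t" -- "Create"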
Parsec
12,937,325
17
I recently wrote a parser in Python using Ply (it's a Python reimplementation of yacc). When I was almost done with the parser, I discovered that the grammar I need to parse requires me to do some lookup during parsing to inform the lexer. Without doing a lookup to inform the lexer, I cannot correctly parse the strings in the language.

Given that I can control the state of the lexer from the grammar rules, I think I'll be solving my use case using a lookup table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options.

In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production-quality library full of parsing functionality so I can build a context-sensitive parser in Python?

EDIT: All my attempts at context-free parsing have failed. For this reason, I don't expect ANTLR to be useful here.
I believe that pyparsing is based on the same principles as parsec.
Parsec
94,952
16
I'm trying to get this trivial parsec code to compile:

import Text.Parsec

simple = letter

but I keep getting this error:

No instance for (Stream s0 m0 Char)
  arising from a use of `letter'
Possible fix:
  add an instance declaration for (Stream s0 m0 Char)
In the expression: letter
In an equation for `simple': simple = letter
I think you have run against the monomorphism restriction. This restriction means: if a variable is declared with no explicit arguments, its type has to be monomorphic. This forces the typechecker to pick a particular instance of Stream, but it can't decide.

There are two ways to fight it:

Give simple an explicit signature:

simple :: Stream s m Char => ParsecT s u m Char
simple = letter

Disable the monomorphism restriction:

{-# LANGUAGE NoMonomorphismRestriction #-}

import Text.Parsec

simple = letter

See What is the monomorphism restriction? for more information on the monomorphism restriction.
Parsec
6,723,208
16
One common problem I have with Parsec is that it tends to ignore invalid input if it occurs in the "right" place.

As a concrete example, suppose we have integer :: Parser Int, and I write

expression = sepBy integer (char '+')

(Ignore whitespace issues for a moment.)

This correctly parses something like "123+456+789". However, if I feed it "123+456-789", it merrily ignores the illegal "-" character and the trailing part of the expression; I actually wanted an error message telling me about the invalid input, not just having it silently ignore that part.

I understand why this happens; what I'm not sure about is how to fix it. What is the general method for designing parsers that consume all supplied input and succeed only if all of it is a valid expression?
It's actually pretty simple: just ensure it's followed by eof:

parse (expression <* eof) "<interactive>" "123+456-789"

eof matches the end of the input, even if the input is just a string and not a file. Obviously, this only makes sense at the top level of your parser.
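If this comes up a lot, it may be worth a tiny wrapper (a sketch; the helper name is made up):

import Text.Parsec
import Text.Parsec.String (Parser)

-- run a parser and insist that it consumes the entire input
parseAll :: Parser a -> String -> Either ParseError a
parseAll p = parse (p <* eof) "<input>"

Here <* comes from Control.Applicative (re-exported by the Prelude on recent GHCs).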
Parsec
16,209,278
16
I'm looking for a good ocaml parsing library that isn't a derivative of flex/bison. Ideally, I'd like a monadic combinator library along the lines of parsec, but I can't find anything. I would use haskell, but making llvm bindings for haskell is proving more tiresome than I originally thought. Cheers, Duane
Here's one library, via Google. (Which also brought up this and this, which lists several more relevant-sounding libraries.) When I wrote a combinator parser in ML, it turned out rather cumbersome to use because of the value restriction and eager evaluation, which forced you to eta-expand your grammar rules. Ocaml is said to be more relaxed about the value restriction, though -- maybe you'll be spared some of that pain.
Parsec
307,499
14
I'm trying to parse an indentation-based language (think Python, Haskell itself, Boo, YAML) in Haskell using Parsec. I've seen the IndentParser library, and it looks like it's the perfect match, but what I can't figure out is how to make my TokenParser into an indentation parser. Here's the code I have so far:

import qualified Text.ParserCombinators.Parsec.Token as T
import qualified Text.ParserCombinators.Parsec.IndentParser.Token as IT

lexer = T.makeTokenParser mylangDef
ident = IT.identifier lexer

This throws the error:

parser2.hs:29:28:
    Couldn't match expected type `IT.TokenParser st'
           against inferred type `T.GenTokenParser s u m'
    In the first argument of `IT.identifier', namely `lexer'
    In the expression: IT.identifier lexer
    In the definition of `ident': ident = IT.identifier lexer

What am I doing wrong? How should I create an IT.TokenParser? Or is IndentParser broken and to be avoided?
It looks like you're using Parsec 3 here, while IndentParser expects Parsec 2. Your example compiles for me with -package parsec-2.1.0.1. So IndentParser isn't necessarily broken, but the author(s) should have been more specific about versions in the list of dependencies.

It's possible to have both versions of Parsec installed, so there's no reason you shouldn't use IndentParser unless you're committed to using Parsec 3 for other reasons.

UPDATE: Actually no changes to the source are necessary to get IndentParser working with Parsec 3. The problem that both of us were having seems to be caused by the fact that cabal-install has a "soft preference" for Parsec 2. You can simply reinstall IndentParser with an explicit constraint on the Parsec version:

cabal install IndentParser --reinstall --constraint="parsec >= 3"

Alternatively you can download the source and build and install in the normal way.
Parsec
3,023,439
14
I was trying to use ghc-7.10 (RC 2) and got this message in a number of cases, e.g.:

src/Text/Regex/XMLSchema/Generic/RegexParser.hs:439:5:
    Non type-variable argument
      in the constraint: Text.Parsec.Prim.Stream s m Char
    (Use FlexibleContexts to permit this)
    When checking that `prop' has the inferred type
      prop :: forall s u (m :: * -> *) (t :: * -> *).
              (Foldable t, Text.Parsec.Prim.Stream s m Char) =>
              Char -> t Char -> Text.Parsec.Prim.ParsecT s u m [Char]
    In an equation for `isCategory'':
        isCategory'
          = (foldr1 (<|>) . map (uncurry prop)
             $ [('L', "ultmo"), ('M', "nce"), ('N', "dlo"), ....])
            <?> "illegal Unicode character property"
          where
              prop c1 cs2 = do { _ <- char c1; .... }
Failed to install hxt-regex-xmlschema-9.2.0

This must be something that is introduced by the new GHC, or the new base that comes with it, or the new parsec (3.1.8), since it worked before.

Source code snippet:

isCategory' :: Parser String
isCategory' = ( foldr1 (<|>) . map (uncurry prop) $
                [ ('L', "ultmo")
                , ('M', "nce")
                , ('N', "dlo")
                , ('P', "cdseifo")
                , ('Z', "slp")
                , ('S', "mcko")
                , ('C', "cfon")
                ]
              ) <?> "illegal Unicode character property"
  where
    prop c1 cs2 = do
      _  <- char c1
      s2 <- option "" ( do c2 <- satisfy (`elem` cs2)
                           return [c2]
                      )
      return $ c1 : s2

Note: I am not asking about this specific library (hxt-*) since I observed this in other places also.
This was a change introduced in GHC 7.10.1-rc1:

    GHC now checks that all the language extensions required for the inferred type signatures are explicitly enabled. This means that if any of the type signatures inferred in your program requires some language extension you will need to enable it. The motivation is that adding a missing type signature inferred by GHC should yield a program that typechecks. Previously this was not the case.

    This is a breaking change. Code that used to compile in the past might fail with an error message requiring some particular language extension (most likely -XTypeFamilies, -XGADTs or -XFlexibleContexts).
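So the fix is either to enable the extension or to give the binding a signature that needs no Stream constraint at all. A sketch of the second option against the snippet above (pinning the parser to the concrete Parser type):

import Text.Parsec
import Text.Parsec.String (Parser)

-- with a monomorphic signature, no Stream constraint is inferred,
-- so FlexibleContexts is no longer needed
prop :: Char -> String -> Parser String
prop c1 cs2 = do
    _  <- char c1
    s2 <- option "" (fmap (:[]) (satisfy (`elem` cs2)))
    return (c1 : s2)

The first option is just adding {-# LANGUAGE FlexibleContexts #-} at the top of the offending module.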
Parsec
28,287,329
14
All of the parsers in Text.Parsec.Token politely use lexeme to eat whitespace after a token. Unfortunately for me, whitespace includes new lines, which I want to use as expression terminators. Is there a way to convince lexeme to leave a new line?
No, it is not. Here is the relevant code, from Text.Parsec.Token:

lexeme p = do{ x <- p; whiteSpace; return x }

--whiteSpace
whiteSpace
    | noLine && noMulti = skipMany (simpleSpace <?> "")
    | noLine            = skipMany (simpleSpace <|> multiLineComment <?> "")
    | noMulti           = skipMany (simpleSpace <|> oneLineComment <?> "")
    | otherwise         = skipMany (simpleSpace <|> oneLineComment <|> multiLineComment <?> "")
    where
      noLine  = null (commentLine languageDef)
      noMulti = null (commentStart languageDef)

One will notice in the where clause of whiteSpace that the only options looked at deal with comments. The lexeme function uses whiteSpace, and it is used liberally in the rest of Text.Parsec.Token.

Update Sept. 28, 2015

The ultimate solution for me was to use a proper lexical analyser (alex). Parsec does a very good job as a parsing library, and it is a credit to the design that it can be mangled into doing lexical analysis, but for all but small and simple projects it will quickly become unwieldy. I now use alex to create a linear set of tokens and then Parsec turns them into an AST.
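A common workaround, sketched below, is to skip Text.Parsec.Token for whitespace-sensitive tokens and roll a minimal lexeme that stops at newlines (all names here are made up):

import Text.Parsec
import Text.Parsec.String (Parser)

-- skip spaces and tabs, but never newlines
lineSpaces :: Parser ()
lineSpaces = skipMany (oneOf " \t")

-- a lexeme that stays on the current line
lexeme' :: Parser a -> Parser a
lexeme' p = do { x <- p; lineSpaces; return x }

symbol' :: String -> Parser String
symbol' = lexeme' . string

-- the newline (plus any following blank lines) becomes the terminator
terminator :: Parser ()
terminator = newline >> spaces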
Parsec
5,672,142
13
I'm looking at this library, which has little documentation: https://pythonhosted.org/parsec/#examples

I understand there are alternatives, but I'd like to use this library.

I have the following string I'd like to parse:

mystr = """
<kv>
key1: "string"
key2: 1.00005
key3: [1,2,3]
</kv>
<csv>
date,windspeed,direction
20190805,22,NNW
20190805,23,NW
20190805,20,NE
</csv>"""

While I'd like to parse the whole thing, I'd settle for just grabbing the <tags>. I have:

>>> import parsec
>>> tag_start = parsec.Parser(lambda x: x == "<")
>>> tag_end = parsec.Parser(lambda x: x == ">")
>>> tag_name = parsec.Parser(parsec.Parser.compose(parsec.many1, parsec.letter))
>>> tag_open = parsec.Parser(parsec.Parser.joint(tag_start, tag_name, tag_end))

OK, looks good. Now to use it:

>>> tag_open.parse(mystr)
Traceback (most recent call last):
...
TypeError: <lambda>() takes 1 positional argument but 2 were given

This fails. I'm afraid I don't even understand what it meant about my lambda expression giving two arguments; it's clearly 1. How can I proceed?

My optimal desired output for all the bonus points is:

[
  {"type": "tag", "name" : "kv", "values" : [
      {"key1" : "string"},
      {"key2" : 1.00005},
      {"key3" : [1,2,3]}
  ]},
  {"type" : "tag", "name" : "csv", "values" : [
      {"date" : 20190805, "windspeed" : 22, "direction": "NNW"}
      {"date" : 20190805, "windspeed" : 23, "direction": "NW"}
      {"date" : 20190805, "windspeed" : 20, "direction": "NE"}
  ]}
]

The output I'd settle for understanding in this question is using functions like those described above for start and end tags to generate:

[
  {"tag": "kv"},
  {"tag" : "csv"}
]

And simply be able to parse arbitrary XML-like tags out of the messy mixed-text entry.
I encourage you to define your own parser using those combinators, rather than constructing the Parser directly.

If you want to construct a Parser by wrapping a function, as the documentation states, the fn should accept two arguments: the first is the text and the second is the current position. And fn should return a Value via Value.success or Value.failure, rather than a boolean. You can grep @Parser in the parsec/__init__.py in this package to find more examples of how it works.

For your case in the description, you could define the parser as follows:

import re

from parsec import *

spaces = regex(r'\s*', re.MULTILINE)
name = regex(r'[_a-zA-Z][_a-zA-Z0-9]*')
tag_start = spaces >> string('<') >> name << string('>') << spaces
tag_stop = spaces >> string('</') >> name << string('>') << spaces

@generate
def header_kv():
    key = yield spaces >> name << spaces
    yield string(':')
    value = yield spaces >> regex('[^\n]+')
    return {key: value}

@generate
def header():
    tag_name = yield tag_start
    values = yield sepBy(header_kv, string('\n'))
    tag_name_end = yield tag_stop
    assert tag_name == tag_name_end
    return {
        'type': 'tag',
        'name': tag_name,
        'values': values
    }

@generate
def body():
    tag_name = yield tag_start
    values = yield sepBy(sepBy1(regex(r'[^\n<,]+'), string(',')), string('\n'))
    tag_name_end = yield tag_stop
    assert tag_name == tag_name_end
    return {
        'type': 'tag',
        'name': tag_name,
        'values': values
    }

parser = header + body

If you run parser.parse(mystr), it yields:

({'type': 'tag', 'name': 'kv', 'values': [{'key1': '"string"'}, {'key2': '1.00005'}, {'key3': '[1,2,3]'}]},
 {'type': 'tag', 'name': 'csv', 'values': [['date', 'windspeed', 'direction'], ['20190805', '22', 'NNW'], ['20190805', '23', 'NW'], ['20190805', '20', 'NE']]})

You can refine the definition of values in the above code to get the result in the exact form you want.
Parsec
57,368,870
13
I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn.

For a simple example, consider the very start of the JLS Java grammar:

Literal:
    IntegerLiteral
    FloatingPointLiteral

I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

literal = do { x <- (do { v <- integer; return (IntLiteral v) })
                    <|> (do { v <- float; return (FPLiteral v) })
             ; return (Literal x)
             }

will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases.

Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.
One way you can do this is to use the try combinator, which allows a parser to consume input and fail without failing the whole parse. Another is to use Text.ParserCombinators.ReadP, which implements a symmetric choice operator, in which it is proven that a +++ b = b +++ a, so it really doesn't matter which order. I'm rather partial to ReadP, since it is minimal but provides what you need to build up a really powerful parser.
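For the Literal example, the try route might look like this (a sketch: the float' helper and the exact constructor shapes are just for illustration):

import Text.Parsec
import Text.Parsec.String (Parser)

data Literal = IntLiteral Integer | FPLiteral Double deriving Show

-- the float branch may consume "15" before failing at the missing '.';
-- try lets it back out so the integer branch still sees the whole input
literal :: Parser Literal
literal = try (FPLiteral . read <$> float')
      <|> (IntLiteral . read <$> many1 digit)
  where
    float' = do
        whole <- many1 digit
        _     <- char '.'
        frac  <- many1 digit
        return (whole ++ "." ++ frac)

With this, "15.2" parses as FPLiteral 15.2 and "15" as IntLiteral 15; the longer alternative still has to come first, but try keeps its failure from poisoning the input for the next branch.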
Parsec
2,483,411
12
I'm surprised that I could not find any info on this. I must be the only person having any trouble with it.

So, let's say I have a dash counter. I want it to count the number of dashes in the string, and return the string. Pretend I gave an example that won't work using parsec's state handling. So this should work:

dashCounter = do
  str <- many1 dash
  count <- get
  return (count, str)

dash = do
  char '-'
  modify (+1)

And indeed, this compiles. Okay, so I try to use it:

:t parse dashCounter "" "----"
parse dashCounter "" "----"
  :: (Control.Monad.State.Class.MonadState t Data.Functor.Identity.Identity,
      Num t) =>
     Either ParseError (t, [Char])

Okay, that makes sense. It should return the state and the string. Cool.

>parse dashCounter "" "----"
<interactive>:1:7:
    No instance for (Control.Monad.State.Class.MonadState t0
                       Data.Functor.Identity.Identity)
      arising from a use of `dashCounter'
    Possible fix:
      add an instance declaration for
      (Control.Monad.State.Class.MonadState t0 Data.Functor.Identity.Identity)
    In the first argument of `parse', namely `dashCounter'
    In the expression: parse dashCounter "" "----"
    In an equation for `it': it = parse dashCounter "" "----"

Oops. But then how could it have ever hoped to work in the first place? There's no way to input the initial state.

There is also a function:

>runPT dashCounter (0::Int) "" "----"

But it gives a similar error.

<interactive>:1:7:
    No instance for (Control.Monad.State.Class.MonadState Int m0)
      arising from a use of `dashCounter'
    Possible fix:
      add an instance declaration for
      (Control.Monad.State.Class.MonadState Int m0)
    In the first argument of `runPT', namely `dashCounter'
    In the expression: runPT dashCounter (0 :: Int) "" "----"
    In an equation for `it': it = runPT dashCounter (0 :: Int) "" "----"

I feel like I should have to runState on it, or there should be a function that already does it internally, but I can't seem to figure out where to go from here.

Edit: I should have specified more clearly, I did not want to use parsec's state handling. The reason is I have a feeling I don't want its backtracking to affect what it collects with the problem I'm preparing to solve it with. However, Mr. McCann has figured out how this should fit together and the final code would look like this:

dashCounter = do
  str <- many1 dash
  count <- get
  return (count, str)

dash = do
  c <- char '-'
  modify (+1)
  return c

test = runState (runPT dashCounter () "" "----------") 0

Thanks a lot.
You've actually got multiple problems going on here, all of which are relatively non-obvious the first time around.

Starting with the simplest: dash is returning (), which doesn't seem to be what you want given that you're collecting the results. You probably wanted something like dash = char '-' <* modify (+1). (Note that I'm using an operator from Control.Applicative here, because it looks tidier.)

Next, clearing up a point of confusion: when you get the reasonable-looking type signature in GHCi, note the context of (Control.Monad.State.Class.MonadState t Data.Functor.Identity.Identity, Num t). That's not saying what things are, it's telling you what they need to be. Nothing guarantees that the instances it's asking for exist and, in fact, they don't. Identity is not a state monad!

On the other hand, you're absolutely correct in thinking that parse doesn't make sense; you can't use it here. Consider its type:

parse :: Stream s Identity t => Parsec s () a -> SourceName -> s -> Either ParseError a

As is customary with monad transformers, Parsec is a synonym for ParsecT applied to the identity monad. And while ParsecT does provide user state, you apparently don't want to use it, and ParsecT does not give an instance of MonadState anyhow. Here's the only relevant instance:

MonadState s m => MonadState s (ParsecT s' u m)

In other words, to treat a parser as a state monad you have to apply ParsecT to some other state monad.

This sort of brings us to the next problem: ambiguity. You're using a lot of type class methods and no type signatures, so you're likely to run into situations where GHC can't know what type you actually want, so you have to tell it.

Now, as a quick solution, let's first define a type synonym to give a name to the monad transformer stack we want:

type StateParse a = ParsecT String () (StateT Int Identity) a

Give dashCounter the relevant type signature:

dashCounter :: StateParse (Int, String)
dashCounter = do
  str <- many1 dash
  count <- get
  return (count, str)

And add a special-purpose "run" function:

runStateParse p sn inp count = runIdentity $ runStateT (runPT p () sn inp) count

Now, in GHCi:

Main> runStateParse dashCounter "" "---" 0
(Right (3,"---"),3)

Also, note that it's pretty common to use a newtype around a transformer stack instead of just a type synonym. This can help with the ambiguity issues in some cases, and obviously avoids ending up with gigantic type signatures.
Parsec
6,876,718
12
Mixing the lexer and parsing phases in one phase sometimes makes Parsec parsers less readable but also slows them down. One solution is to use Alex as a tokenizer and then Parsec as a parser of the token stream.

This is fine, but it would be even better if I could get rid of Alex, because it adds one preprocessing phase in the compilation pipeline, doesn't integrate well with Haskell "IDEs", etc. I was wondering if there was such a thing as a Haskell EDSL for describing tokenizers, very much in the style of Alex, but as a library.
Yes - http://www.cse.unsw.edu.au/~chak/papers/Cha99.html

Before Hackage, Manuel used to release the code in a package called CTK (compiler toolkit). I'm not sure what the status of the project is these days.

I think Thomas Hallgren's lexer from the paper "Lexing Haskell in Haskell" was dynamic rather than a code generator; whilst the release is tailored to lexing Haskell, the machinery in the library is more general. Iavor Diatchki has put the code on Hackage: http://hackage.haskell.org/package/haskell-lexer
Parsec
7,751,647
12
I am trying to learn how to use Parsec to write a Delphi parser, but I am getting stuck at defining the LanguageDef.

In Delphi, there are two types of comment blocks, (* comments *) and { comments }. But the commentStart and commentEnd fields of LanguageDef have type String, not [String], so I can only put in one or the other.

So I tried to make my own whiteSpace parser, but I'm not sure I could actually pass it into makeTokenParser. Any help would be appreciated. Thanks!

Update: John and Chris have helped me to understand and get around the problem, but the solution involves replacing a huge number of parsers that makeTokenParser provides, so it's not exactly desirable. I will post again if I find a better solution.
My reading of the Text.ParserCombinators.Parsec.Language file is that this cannot be done directly using a LanguageDef. I believe you are on the right track to write your own whiteSpace parser. In order to use it successfully, you need to overwrite the whiteSpace parser that is generated by makeTokenParser.

The TokenParser created by makeTokenParser is a record with each field containing a parser. We can create a new copy of the record with one of those fields replaced as follows:

-- ask GHCi for the actual type signature constraints
-- Type sig is approx.
fixWhiteSpace :: TokenParser -> Parser -> TokenParser
fixWhiteSpace originalTokenParser myWhiteSpaceParser =
    originalTokenParser { whiteSpace = myWhiteSpaceParser }
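For completeness, a sketch of what myWhiteSpaceParser could look like for the two Delphi comment forms (hand-rolled rather than derived from a LanguageDef; all names are made up):

import Text.Parsec
import Text.Parsec.String (Parser)

-- whitespace that understands both (* ... *) and { ... } comments
myWhiteSpace :: Parser ()
myWhiteSpace = skipMany (space' <|> parenComment <|> braceComment)
  where
    space'       = () <$ oneOf " \t\r\n"
    parenComment = try (string "(*") >> skipTill (string "*)")
    braceComment = char '{' >> skipTill (char '}')
    skipTill end = () <$ manyTill anyChar (try end)

This is the parser you would then install over the generated record with fixWhiteSpace above.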
Parsec
8,384,292
12
I am looking for some sample grammars written in FParsec that would go beyond the samples in the project repository. I have found this very nice grammar of GLSL, but this is the only sample I found. What I need is a grammar for a language similar to C or JavaScript.
Luca Bolognese has written a great series of Write Yourself a Scheme in 48 Hours in F# where he used FParsec for parsing. The full source code with detailed test cases are online here. The most relevant post is 6th one where he talked about parsing a simple Lisp-like language. This language is closer to JavaScript than to C just so you know. Current series on his blog is parsing lambda expressions in F# (using FParsec) which could be helpful for you too.
Parsec
9,061,862
12
I'm new to Haskell and I am trying to parse expressions. I found out about Parsec and I also found some articles, but I don't seem to understand what I have to do. My problem is that I want to give an expression like "x^2+2*x+3" and the result to be a function that takes an argument x and returns a value. I am very sorry if this is an easy question but I really need some help. Thanks!

The code I inserted is from the article that you can find on this link.

import Control.Monad(liftM)
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Expr
import Text.ParserCombinators.Parsec.Token
import Text.ParserCombinators.Parsec.Language

data Expr = Num Int
          | Var String
          | Add Expr Expr
          | Sub Expr Expr
          | Mul Expr Expr
          | Div Expr Expr
          | Pow Expr Expr
          deriving Show

expr :: Parser Expr
expr = buildExpressionParser table factor
   <?> "expression"

table = [[op "^" Pow AssocRight],
         [op "*" Mul AssocLeft, op "/" Div AssocLeft],
         [op "+" Add AssocLeft, op "-" Sub AssocLeft]]
  where op s f assoc = Infix (do{ string s; return f }) assoc

factor = do{ char '('
           ; x <- expr
           ; char ')'
           ; return x }
     <|> number
     <|> variable
     <?> "simple expression"

number :: Parser Expr
number = do{ ds <- many1 digit
           ; return (Num (read ds)) }
     <?> "number"

variable :: Parser Expr
variable = do{ ds <- many1 letter
             ; return (Var ds) }
       <?> "variable"
This is just a parser for expressions with variables. Actually interpreting the expression is an entirely separate matter.

You should create a function that takes an already parsed expression and values for variables, and returns the result of evaluating the expression. Pseudocode:

evaluate :: Expr -> Map String Int -> Int
evaluate (Num n) _ = n
evaluate (Var x) vars = {- Look up the value of x in vars -}
evaluate (Plus e f) vars = {- Evaluate e and f, and return their sum -}
...

I've deliberately omitted some details; hopefully by exploring the missing parts, you learn more about Haskell.

As a next step, you should probably look at the Reader monad for a convenient way to pass the variable map vars around, and using Maybe or Error to signal errors, e.g. referencing a variable that is not bound in vars, or division by zero.
Parsec
4,711,893
11
I've decided to check out FParsec, and tried to write a parser for λ expressions. As it turns out, eagerness makes recursive parsing difficult. How can I solve this?

Code:

open FParsec

type λExpr =
    | Variable of char
    | Application of λExpr * λExpr
    | Lambda of char * λExpr

let rec FV = function
    | Variable v -> Set.singleton v
    | Application (f, x) -> FV f + FV x
    | Lambda (x, m) -> FV m - Set.singleton x

let Λ0 = FV >> (=) Set.empty

let apply f p = parse {
    let! v = p
    return f v
}

let λ e =
    let expr, exprR = createParserForwardedToRef()

    let var = lower |> apply Variable

    let app = tuple2 expr expr |> apply Application

    let lam = pipe2 (pchar 'λ' >>. many lower) (pchar '.' >>. expr)
                    (fun vs e -> List.foldBack (fun c e -> Lambda (c, e)) vs e)

    exprR := choice [
        lam
        app
        var
        (pchar '(' >>. expr .>> pchar ')')
    ]

    run expr e

Thanks!
As you pointed out, the problem is that your parser for application calls expr recursively, and so there is an infinite loop. The parser needs to be written such that it always consumes some input and then decides what to do.

In the case of lambda calculus, the tricky thing is recognizing an application and a variable, because if you have input like x... then the first character suggests it could be either of them. You can merge the rules for application and variable like this:

let rec varApp = parse {
    let! first = lower |> apply Variable
    let! res =
        choice [ expr |> apply (fun e -> Application(first, e))
                 parse { return first } ]
    return res
}

This first parses a variable and then either parses another expression (in which case it is an application) or it just returns the variable (if there is no expression following the variable).

The rest of the rules are similar:

and lam = pipe2 (pchar 'λ' >>. many lower) (pchar '.' >>. expr)
                (fun vs e -> List.foldBack (fun c e -> Lambda (c, e)) vs e)
and brac = pchar '(' >>. expr .>> pchar ')'
and expr = parse.Delay(fun () -> choice [ lam; varApp; brac ])

I just avoided the need for explicit mutation by using parse.Delay() (which makes it possible to create recursive value references). In principle, it could be written as:

and expr = parse { return! choice [ lam; varApp; brac ] }

...but for some reason, FParsec doesn't implement the ReturnFrom member that is needed if you want to use return! in computation expressions.
Parsec
6,186,230
11
I'm trying to parse some Text with parsec:

data Cmd = LoginCmd String | JoinCmd String | LeaveCmd String
  deriving (Show)

singleparam :: Parser Cmd
singleparam = do
  cmd <- choice [string "leave", string "login", string "join"]
  spaces
  nick <- many1 anyChar
  eof
  return $ LoginCmd nick

I'm expecting choice to try to match "leave", and if it fails, then try "login" etc. But it only tries to match "leave", and if it fails, then gives an error.

ghci> parseTest singleparam (pack "login asdf")
parse error at (line 1, column 1):
unexpected "o"
expecting "leave"

ghci> parseTest singleparam (pack "leave asdf")
LoginCmd "asdf"

What am I doing wrong?
Parsec does not automatically backtrack like this (for efficiency). The rule is that once a branch accepts a token, alternative branches are pruned. The solution is to add explicit backtracking using try (string "leave"), try (string "login") and so on.

In your example, the 'l' character is the token that commits Parsec to the first "leave" branch and abandons the next branches of "login" and "join".

There is more detail in Real World Haskell (book, online) on Parsec.
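Applied to the question's parser, the fix is one try per prefix-sharing alternative (a sketch using String input rather than Text, with a constructor per keyword instead of the question's single LoginCmd):

import Text.Parsec
import Text.Parsec.String (Parser)

data Cmd = LoginCmd String | JoinCmd String | LeaveCmd String deriving Show

singleparam :: Parser Cmd
singleparam = do
    -- "leave" and "login" share the prefix 'l', so they need try;
    -- the final alternative never does
    cmd  <- choice [try (string "leave"), try (string "login"), string "join"]
    spaces
    nick <- many1 anyChar
    eof
    let con = case cmd of
          "leave" -> LeaveCmd
          "join"  -> JoinCmd
          _       -> LoginCmd
    return (con nick)

Now parseTest singleparam "login asdf" prints LoginCmd "asdf".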
Parsec
9,976,388
11
I'm having trouble working out how to use any of the functions in the Text.Parsec.Indent module provided by the indents package for Haskell, which is a sort of add-on for Parsec.

What do all these functions do? How are they to be used?

I can understand the brief Haddock description of withBlock, and I've found examples of how to use withBlock, runIndent and the IndentParser type here, here and here. I can also understand the documentation for the four parsers indentBrackets and friends. But many things are still confusing me.

In particular:

What is the difference between

withBlock f a p

and

do aa <- a
   pp <- block p
   return f aa pp

Likewise, what's the difference between

withBlock' a p

and

do {a; block p}

In the family of functions indented and friends, what is ‘the level of the reference’? That is, what is ‘the reference’?

Again, with the functions indented and friends, how are they to be used? With the exception of withPos, it looks like they take no arguments and are all of type IParser () (IParser defined like this or this) so I'm guessing that all they can do is to produce an error or not and that they should appear in a do block, but I can't figure out the details. I did at least find some examples on the usage of withPos in the source code, so I can probably figure that out if I stare at it for long enough.

<+/> comes with the helpful description “<+/> is to indentation sensitive parsers what ap is to monads” which is great if you want to spend several sessions trying to wrap your head around ap and then work out how that's analogous to a parser. The other three combinators are then defined with reference to <+/>, making the whole group unapproachable to a newcomer. Do I need to use these? Can I just ignore them and use do instead?

The ordinary lexeme combinator and whiteSpace parser from Parsec will happily consume newlines in the middle of a multi-token construct without complaining. But in an indentation-style language, sometimes you want to stop parsing a lexical construct or throw an error if a line is broken and the next line is indented less than it should be. How do I go about doing this in Parsec?

In the language I am trying to parse, ideally the rules for when a lexical structure is allowed to continue on to the next line should depend on what tokens appear at the end of the first line or the beginning of the subsequent line. Is there an easy way to achieve this in Parsec? (If it is difficult then it is not something which I need to concern myself with at this time.)
So, the first hint is to take a look at IndentParser

type IndentParser s u a = ParsecT s u (State SourcePos) a

I.e. it's a ParsecT keeping an extra close watch on SourcePos, an abstract container which can be used to access, among other things, the current column number. So, it's probably storing the current "level of indentation" in SourcePos. That'd be my initial guess as to what "level of reference" means.

In short, indents gives you a new kind of Parsec which is context sensitive—in particular, sensitive to the current indentation. I'll answer your questions out of order.

(2) The "level of reference" is the "belief" referred in the current parser context state of where this indentation level starts. To be more clear, let me give some test cases on (3).

(3) In order to start experimenting with these functions, we'll build a little test runner. It'll run the parser with a string that we give it and then unwrap the inner State part using an initialPos which we get to modify. In code

import Text.Parsec
import Text.Parsec.Pos
import Text.Parsec.Indent
import Control.Monad.State

testParse :: (SourcePos -> SourcePos)
          -> IndentParser String () a
          -> String -> Either ParseError a
testParse f p src = fst $ flip runState (f $ initialPos "") $ runParserT p () "" src

(Note that this is almost runIndent, except I gave a backdoor to modify the initialPos.)

Now we can take a look at indented. By examining the source, I can tell it does two things. First, it'll fail if the current SourcePos column number is less-than-or-equal-to the "level of reference" stored in the SourcePos stored in the State. Second, it somewhat mysteriously updates the State SourcePos's line counter (not column counter) to be current.

Only the first behavior is important, to my understanding. We can see the difference here.

>>> testParse id indented ""
Left (line 1, column 1): not indented

>>> testParse id (spaces >> indented) " "
Right ()

>>> testParse id (many (char 'x') >> indented) "xxxx"
Right ()

So, in order to have indented succeed, we need to have consumed enough whitespace (or anything else!) to push our column position out past the "reference" column position. Otherwise, it'll fail saying "not indented". Similar behavior exists for the next three functions: same fails unless the current position and reference position are on the same line, sameOrIndented fails if the current column is strictly less than the reference column, unless they are on the same line, and checkIndent fails unless the current and reference columns match.

withPos is slightly different. It's not just a IndentParser, it's an IndentParser-combinator—it transforms the input IndentParser into one that thinks the "reference column" (the SourcePos in the State) is exactly where it was when we called withPos.

This gives us another hint, btw. It lets us know we have the power to change the reference column.

(1) So now let's take a look at how block and withBlock work using our new, lower level reference column operators. withBlock is implemented in terms of block, so we'll start with block.

-- simplified from the actual source
block p = withPos $ many1 (checkIndent >> p)

So, block resets the "reference column" to be whatever the current column is and then consumes at least 1 parses from p so long as each one is indented identically as this newly set "reference column". Now we can take a look at withBlock

withBlock f a p = withPos $ do
  r1 <- a
  r2 <- option [] (indented >> block p)
  return (f r1 r2)

So, it resets the "reference column" to the current column, parses a single a parse, tries to parse an indented block of ps, then combines the results using f. Your implementation is almost correct, except that you need to use withPos to choose the correct "reference column".

Then, once you have withBlock, withBlock' = withBlock (\_ bs -> bs).

(5) So, indented and friends are exactly the tools to doing this: they'll cause a parse to immediately fail if it's indented incorrectly with respect to the "reference position" chosen by withPos.

(4) Yes, don't worry about these guys until you learn how to use Applicative style parsing in base Parsec. It's often a much cleaner, faster, simpler way of specifying parses. Sometimes they're even more powerful, but if you understand Monads then they're almost always completely equivalent.

(6) And this is the crux. The tools mentioned so far can only do indentation failure if you can describe your intended indentation using withPos. Quickly, I don't think it's possible to specify withPos based on the success or failure of other parses... so you'll have to go another level deeper. Fortunately, the mechanism that makes IndentParsers work is obvious—it's just an inner State monad containing SourcePos. You can use lift :: MonadTrans t => m a -> t m a to manipulate this inner state and set the "reference column" however you like.

Cheers!
Parsec
15,549,050
11
What does the constraint (Stream s Identity t) mean in the following type declaration?

parse :: (Stream s Identity t) => Parsec s () a -> SourceName -> s -> Either ParseError a

What is Stream in the following class declaration, and what does it mean? I'm totally lost.

class Monad m => Stream s m t | s -> t where

When I use Parsec, I get into a jam with the type-signatures (xxx :: yyy) all the time. I always skip the signatures, load the src into ghci, and then copy the type-signature back to my .hs file. It works, but I still don't understand what all these signatures are.

EDIT: more about the point of my question.

I'm still confused about the 'context' of a type-signature: (Show a) => means a must be an instance of class Show. (Stream s Identity t) => — what's the meaning of this 'context', since t never showed up after the =>?

I have a lot of different parsers to run, so I wrote a wrap function to run any of those parsers with real files. But here comes the problem: here is my code, it cannot be loaded, how can I make it work?

module RunParse where

import System.IO
import Data.Functor.Identity (Identity)
import Text.Parsec.Prim (Parsec, parse, Stream)

--what should I write "runIOParse :: ..."
--runIOParse :: (Stream s Identity t, Show a) => Parsec s () a -> String -> IO ()
runIOParse pa filename = do
  inh <- openFile filename ReadMode
  outh <- openFile (filename ++ ".parseout") WriteMode
  instr <- hGetContents inh
  let result = show $ parse pa filename instr
  hPutStr outh result
  hClose inh
  hClose outh
the constraint: (Stream s Identity t) means what?

It means that the input s your parser works on (i.e. [Char]) must be an instance of the Stream class. In the documentation you see that [Char] is indeed an instance of Stream, since any list is. The parameter t is the token type, which is normally Char and is determined by s, as the functional dependency s -> t states. But don't worry too much about this Stream typeclass. It's used only to have a unified interface for any Stream-like type, e.g. lists or ByteStrings.

what is Stream

A Stream is simply a typeclass. It has the uncons function, which returns the head of the input and the tail in a tuple wrapped in Maybe. Normally you won't need this function. As far as I can see, it's only needed in the most basic parsers like tokenPrimEx.

Edit: what's the meaning of this 'context', since t never showed after the =>

Have a look at functional dependencies. The t never shows after the =>, because it is determined by s. And it means that you can use uncons on whatever s is.

Here is my code, It cannot be loaded, how can I make it work?

Simple: Add an import statement for Text.Parsec.String, which defines the missing instance for Stream [tok] m tok. The documentation could be a bit clearer here, because it looks as if this instance was defined in Text.Parsec.Prim. Alternatively import the whole Parsec library (import Text.Parsec) - this is how I always do it.
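Putting that advice into the asker's module, a sketch of the fixed version: since instr is a String here, it is also simplest to fix s to String in the signature (the fully generic commented-out signature would over-promise, because hGetContents pins the input type). Whether the extra import is still needed depends on the parsec version, so it is kept here following the answer's advice.

module RunParse where

import System.IO
import Text.Parsec.Prim (Parsec, parse)
import Text.Parsec.String ()  -- imported only for its Stream [tok] m tok instance

runIOParse :: Show a => Parsec String () a -> String -> IO ()
runIOParse pa filename = do
  inh <- openFile filename ReadMode
  outh <- openFile (filename ++ ".parseout") WriteMode
  instr <- hGetContents inh
  let result = show $ parse pa filename instr
  hPutStr outh result
  hClose inh
  hClose outh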
Parsec
6,370,094
10
In my work I come across a lot of gnarly sql, and I had the bright idea of writing a program to parse the sql and print it out neatly. I made most of it pretty quickly, but I ran into a problem that I don't know how to solve.

So let's pretend the sql is "select foo from bar where 1". My thought was that there is always a keyword followed by data for it, so all I have to do is parse a keyword, and then capture all gibberish before the next keyword and store that for later cleanup, if it is worthwhile. Here's the code:

import Text.Parsec
import Text.Parsec.Combinator
import Text.Parsec.Char
import Data.Text (strip)

newtype Statement = Statement [Atom]

data Atom = Branch String [Atom] | Leaf String deriving Show

trim str = reverse $ trim' (reverse $ trim' str)
  where trim' (' ':xs) = trim' xs
        trim' str = str

printStatement atoms = mapM_ printAtom atoms

printAtom atom = loop 0 atom
  where loop depth (Leaf str) = putStrLn $ (replicate depth ' ') ++ str
        loop depth (Branch str atoms) = do
          putStrLn $ (replicate depth ' ') ++ str
          mapM_ (loop (depth + 2)) atoms

keywords :: [String]
keywords = [ "select", "update", "delete", "from", "where"]

keywordparser :: Parsec String u String
keywordparser = try ((choice $ map string keywords) <?> "keywordparser")

stuffparser :: Parsec String u String
stuffparser = manyTill anyChar (eof <|> (lookAhead keywordparser >> return ()))

statementparser = do
  key <- keywordparser
  stuff <- stuffparser
  return $ Branch key [Leaf (trim stuff)] <?> "statementparser"

tp = parse (many statementparser) ""

The key here is the stuffparser. That is the stuff in between the keywords that could be anything from column lists to where criteria. This function catches all characters leading up to a keyword. But it needs something else before it is finished. What if there is a subselect? "select id,(select product from products) from bar". Well in that case if it hits that keyword, it screws everything up, parses it wrong and screws up my indenting. Also where clauses can have parenthesis as well.

So I need to change that anyChar into another combinator that slurps up characters one at a time but also tries to look for parenthesis, and if it finds them, traverse and capture all that, but also if there are more parenthesis, do that until we have fully closed the parenthesis, then concatenate it all and return it. Here's what I've tried, but I can't quite get it to work.

stuffparser :: Parsec String u String
stuffparser = fmap concat $ manyTill somechars (eof <|> (lookAhead keywordparser >> return ()))
  where somechars = parens <|> fmap (\c -> [c]) anyChar
        parens = between (char '(') (char ')') somechars

This will error like so:

> tp "select asdf(qwerty) from foo where 1"
Left (line 1, column 14):
unexpected "w"
expecting ")"

But I can't think of any way to rewrite this so that it works. I've tried to use manyTill on the parenthesis part, but I end up having trouble getting it to typecheck when I have both string producing parens and single chars as alternatives. Does anyone have any suggestions on how to go about this?
Yeah, between might not work for what you're looking for. Of course, for your use case, I'd follow hammar's suggestion and grab an off-the-shelf SQL parser. (personal opinion: or, try not to use SQL unless you really have to; the idea to use strings for database queries was imho a historical mistake).

Note: I add an operator called <++> which will concatenate the results of two parsers, whether they are strings or characters. (code at bottom.)

First, for the task of parsing parenthesis: the top level will parse some stuff between the relevant characters, which is exactly what the code says,

parseParen = char '(' <++> inner <++> char ')'

Then, the inner function should parse anything else: non-parens, possibly including another set of parenthesis, and non-paren junk that follows.

parseParen = char '(' <++> inner <++> char ')'
  where inner = many (noneOf "()") <++> option "" (parseParen <++> inner)

I'll make the assumption that for the rest of the solution, what you want to do is analogous to splitting things up by top-level SQL keywords. (i.e. ignoring those in parenthesis). Namely, we'll have a parser that will behave like so,

Main> parseTest parseSqlToplevel "select asdf(select m( 2) fr(o)m w where n) from b where delete 4"
[(Select," asdf(select m( 2) fr(o)m w where n) "),(From," b "),(Where," "),(Delete," 4")]

Suppose we have a parseKw parser that will get the likes of select, etc. After we consume a keyword, we need to read until the next [top-level] keyword. The last trick to my solution is using the lookAhead combinator to determine whether the next word is a keyword, and put it back if so. If it's not, then we consume a parenthesis or other character, and then recurse on the rest.

-- consume spaces, then eat a word or parenthesis
parseOther = many space <++>
    (("" <$ lookAhead (try parseKw)) <|>   -- if there's a keyword, put it back!
     option "" ((parseParen <|> many1 (noneOf "() \t")) <++> parseOther))

My entire solution is as follows

-- overloaded operator to concatenate string results from parsers
class CharOrStr a where
  toStr :: a -> String

instance CharOrStr Char where
  toStr x = [x]

instance CharOrStr String where
  toStr = id

infixl 4 <++>
f <++> g = (\x y -> toStr x ++ toStr y) <$> f <*> g

data Keyword = Select | Update | Delete | From | Where deriving (Eq, Show)

parseKw =
  (Select <$ string "select") <|>
  (Update <$ string "update") <|>
  (Delete <$ string "delete") <|>
  (From <$ string "from") <|>
  (Where <$ string "where") <?>
  "keyword (select, update, delete, from, where)"

-- consume spaces, then eat a word or parenthesis
parseOther = many space <++>
    (("" <$ lookAhead (try parseKw)) <|>   -- if there's a keyword, put it back!
     option "" ((parseParen <|> many1 (noneOf "() \t")) <++> parseOther))

parseSqlToplevel = many ((,) <$> parseKw <*> (space <++> parseOther)) <* eof

parseParen = char '(' <++> inner <++> char ')'
  where inner = many (noneOf "()") <++> option "" (parseParen <++> inner)

edit - version with quote support

you can do the same thing as with the parens to support quotes,

import Control.Applicative hiding (many, (<|>))
import Text.Parsec
import Text.Parsec.Combinator

-- overloaded operator to concatenate string results from parsers
class CharOrStr a where
  toStr :: a -> String

instance CharOrStr Char where
  toStr x = [x]

instance CharOrStr String where
  toStr = id

infixl 4 <++>
f <++> g = (\x y -> toStr x ++ toStr y) <$> f <*> g

data Keyword = Select | Update | Delete | From | Where deriving (Eq, Show)

parseKw =
  (Select <$ string "select") <|>
  (Update <$ string "update") <|>
  (Delete <$ string "delete") <|>
  (From <$ string "from") <|>
  (Where <$ string "where") <?>
  "keyword (select, update, delete, from, where)"

-- consume spaces, then eat a word or parenthesis
parseOther = many space <++>
    (("" <$ lookAhead (try parseKw)) <|>   -- if there's a keyword, put it back!
     option "" ((parseParen <|> parseQuote <|> many1 (noneOf "'() \t")) <++> parseOther))

parseSqlToplevel = many ((,) <$> parseKw <*> (space <++> parseOther)) <* eof

parseQuote = char '\'' <++> inner <++> char '\''
  where inner = many (noneOf "'\\") <++>
                option "" (char '\\' <++> anyChar <++> inner)

parseParen = char '(' <++> inner <++> char ')'
  where inner = many (noneOf "'()") <++>
                (parseQuote <++> inner <|> option "" (parseParen <++> inner))

I tried it with parseTest parseSqlToplevel "select ('a(sdf'())b". cheers
Parsec
6,732,272
10
Is it somehow possible to get a parse error of some custom type? It would be cool to get more information about the parsing context from the error, for example. It seems not very convenient to have error info just in the form of a text message.
As Rhymoid observes, it is not possible directly, unfortunately. Combining Parsec with your own Either-like monad won't help either: it will exit too soon (ParsecT over Either) or too late (EitherT over ParsecT).

If you want it badly, you can do it like this: use ParsecT over State (SourcePos, YourErrorType). (You can't use Parsec's user state because then the error will be backtracked.) Every time you'd like to emit a structured error value, record it in the state with the current location, but only if the current location is farther than the already recorded one. (If locations are equal, you may want to merge the errors somehow. Maybe keep a list of them.)

Finally, when you run your monad stack, you'll be given the final state and a ParseError that contains a SourcePos. Just check that the two locations coincide. If they don't (i.e. the Parsec's SourcePos is farther), then you don't have an error value for this error.
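To make the suggestion concrete, here is a minimal sketch of that approach. Everything below is illustrative: MyError and the merge rule (furthest position wins) are assumptions, not part of the original answer.

import Control.Monad.State
import Text.Parsec hiding (State)

data MyError = MyError String   -- your structured error type (assumed)
  deriving Show

type P a = ParsecT String () (State (SourcePos, Maybe MyError)) a

-- Record a structured error, but keep only the one that occurred
-- furthest into the input (later source positions win).
reportError :: MyError -> P ()
reportError e = do
  pos <- getPosition
  lift $ modify $ \old@(best, _) ->
    if pos >= best then (pos, Just e) else old

-- Run the stack: returns Parsec's result plus the recorded custom error.
runP :: P a -> String -> (Either ParseError a, (SourcePos, Maybe MyError))
runP p input = runState (runParserT p () "" input) (initialPos "", Nothing)

Because the State layer sits underneath ParsecT, its updates survive Parsec's backtracking, which is exactly why the answer recommends it over Parsec's user state.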
Parsec
16,595,565
10
I was reading a lot about Haskell parser combinators and found a lot of topics like:

Parsec vs Yacc/Bison/Antlr: Why and when to use Parsec?
Which Haskell parsing technology is most pleasant to use, and why?
Parsec or happy (with alex) or uu-parsinglib
Choosing a Haskell parser
What is the advantage of using a parser generator like happy as opposed to using parser combinators?

But all these topics compare parser combinators with parser generators. I want to ask you which parser combinator library suits the following conditions best:

1. I want to have good control over the errors (including error recovery) and the messages shown to the user
2. I want to be able to feed the parser with small parts of text (not the whole file at once)
3. I want to be able to redesign the grammar easily (I'm currently developing the grammar, so a nice way of working is important)
4. The final parser should be fast (the performance is important, but not as much as points 1-3).

I've found out that the most popular parser combinator libraries are:

Parsec
uu-parsinglib
attoparsec
I would say definitely go with Parsec; here's why:

Attoparsec is designed to be quick to use, but lacks the strong support for error messages you get in Parsec, so that is a win for your first point.

My experience of using parser combinator libraries is that it is really easy to test individual parts of the parsers, either in GHCi or in tests, so the second point is satisfied by all of them really. Lastly, Attoparsec and Parsec are pretty darn fast.

Finally, Parsec has been around longest and has many useful and advanced features. This means that general maintainability is going to be easier, more examples are in Parsec and more people are familiar with it. uu-parsinglib is definitely worth the time to explore, but I would suggest that getting familiar with Parsec first is the better course for these reasons.

(Alex is also the most recommended lexer to use with Parsec or otherwise, but I have not used it myself.)
Parsec
18,028,220
10
I was expecting to find a function

integer :: Stream s m Char => ParsecT s u m Integer

or maybe even

natural :: Stream s m Char => ParsecT s u m Integer

in the standard libraries, but I did not find one. What is the standard way of parsing plain natural numbers directly to an Integer?
What I often do is use the expression

read <$> many1 digit

which can have type Stream s m Char => ParsecT s u m Integer (or simply Parser Integer). I don't like the use of the partial function read, but when the parser succeeds I know that the read will succeed, and it is somewhat readable.
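Wrapped up as a named parser with a quick GHCi check (a small sketch):

import Text.Parsec
import Text.Parsec.String (Parser)

natural :: Parser Integer
natural = read <$> many1 digit   -- safe here: many1 digit only succeeds on "[0-9]+"

-- ghci> parseTest natural "1234"
-- 1234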
Parsec
24,171,005
10
I am trying to parse a very simple language that consists of only decimal or binary numbers. For example, here are some valid inputs:

#b1
#d1
#b0101
#d1234

I am having a problem using Parsec's choice operator: <|>. According to the tutorial Write Yourself a Scheme in 48 Hours:

[The choice operator] tries the first parser, then if it fails, tries the second. If either succeeds, then it returns the value returned by that parser.

But in my experience, I see that the order of the parsers supplied matters. Here is my program:

import System.Environment
import Text.ParserCombinators.Parsec

main :: IO ()
main = do
  (x:_) <- getArgs
  putStrLn ( "Hello, " ++ readExp x)

bin :: Parser String
bin = do
  string "#b"
  x <- many( oneOf "01")
  return x

dec :: Parser String
dec = do
  string "#d"
  x <- many( oneOf "0123456789")
  return x

-- Why does order matter here?
parseExp = (bin <|> dec)

readExp :: String -> String
readExp input = case parse parseExp "test" input of
  Left error -> "Error: " ++ show error
  Right val -> "Found val" ++ show val

Here is how I am running the program:

Installing dependencies

$ cabal sandbox init
$ cabal install parsec

Compiling

$ cabal exec ghc Main

Run

$ ./Main "#d1"
Hello, Error: "test" (line 1, column 1):
unexpected "d"
expecting "#b"

$ ./Main "#b1"
Hello, Found val"1"

If I change the order of the parsers as follows:

parseExp = (dec <|> bin)

then only binary numbers are detected and the program fails to identify the decimal numbers.

With the tests that I have performed, I see this problem only happens when one of the parsers has started parsing an input, e.g. if a hash character # is found, the bin parser is activated, ending up failing as the next character expected is b and not d. It seems like there should be some kind of backtracking that should happen, which I am not aware of. Appreciate the help!
Parsec has two kinds of "failure": there are failures that consume input, and failures that don't. To avoid backtracking (and therefore holding onto inputs longer than necessary/being generally unfriendly to the garbage collector), (<|>) commits to the first parser as soon as it consumes input; so that if its first argument consumes input and fails, its second parser never gets a chance to succeed. You can explicitly request backtracking behavior with try, thus:

Text.Parsec> parse (string "ab" <|> string "ac") "" "ac"
Left (line 1, column 1):
unexpected "c"
expecting "ab"
Text.Parsec> parse (try (string "ab") <|> string "ac") "" "ac"
Right "ac"

Unfortunately, try has some pretty annoying performance penalties, which means that if you want a performant parser, you will have to refactor your grammar a bit. I would do that with the above parser this way:

Text.Parsec> parse (char 'a' >> (("ab" <$ char 'b') <|> ("ac" <$ char 'c'))) "" "ac"
Right "ac"

In your case, you will need to factor out the "#" mark similarly.
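Applied to the question's grammar, that factoring could look something like this sketch (a drop-in replacement for parseExp in the question's module):

parseExp :: Parser String
parseExp = do
  char '#'                            -- the shared prefix is consumed only once
  (char 'b' >> many1 (oneOf "01"))    -- binary digits after #b
    <|> (char 'd' >> many1 digit)     -- decimal digits after #d

After char '#', choosing between 'b' and 'd' needs no try, because the two branches now differ in their very first character.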
Parsec
33,057,481
10
I'm writing a programming language which uses Parsec for its parsing. For reporting error messages, I've got each element of my syntax tree labelled with its source location, using the getPosition function from the Pos module of Parsec. However, it only gives the location of the beginning of each expression I parse, and I'd like the beginning and end, so that I can highlight their entire location within the source code. Is such a thing possible with parsec? Is there a standard way of getting the end-point of an expression I'm parsing, so that I can include it in my AST?
You can use getPosition after you parse as well.

import Text.Parsec
import Text.Parsec.String

spanned :: Parser a -> Parser (SourcePos, SourcePos, a)
spanned p = do
  pos1 <- getPosition
  a <- p
  pos2 <- getPosition
  pure (pos1, pos2, a)

Testing:

> parseTest (spanned (many1 (char 'a'))) "aaaaafff"
((line 1, column 1),(line 1, column 6),"aaaaa")
Parsec
36,078,405
10
I was wondering, if there is a standard, canonical way in Haskell to write not only a parser for a specific file format, but also a writer. In my case, I need to parse a data file for analysis. However, I also simulate data to be analyzed and save it in the same file format. I could now write a parser using Parsec or something equivalent and also write functions that perform the text output in the way that it is needed, but whenever I change my file format, I would have to change two functions in my code. Is there a better way to achieve this goal? Thank you, Dominik
The BNFC-meta package https://hackage.haskell.org/package/BNFC-meta-0.4.0.3 might be what you're looking for: "Specifically, given a quasi-quoted LBNF grammar (as used by the BNF Converter) it generates (using Template Haskell) a LALR parser and pretty printer for the language."

update: found this package that also seems to fulfill the objective (not tested yet) http://hackage.haskell.org/package/syntax
Parsec
45,239,430
10
We are trying to evaluate Keycloak as an SSO solution, and it looks good in many respects, but the documentation is painfully lacking in the basics. For a given Keycloak installation on http://localhost:8080/ for realm test, what are the OAuth2 Authorization Endpoint, OAuth2 Token Endpoint and OpenID Connect UserInfo Endpoint ? We are not interested in using Keycloak's own client library, we want to use standard OAuth2 / OpenID Connect client libraries, as the client applications using the keycloak server will be written in a wide range of languages (PHP, Ruby, Node, Java, C#, Angular). Therefore the examples that use the Keycloak client aren't of use for us.
For Keycloak 1.9 and above, the above information can be retrieved via the url

http://keycloakhost:keycloakport/realms/{realm}/.well-known/openid-configuration

For example, if the realm name is demo:

http://keycloakhost:keycloakport/realms/demo/.well-known/openid-configuration

An example output from above url:

{
    "issuer": "http://localhost:8080/realms/demo",
    "authorization_endpoint": "http://localhost:8080/realms/demo/protocol/openid-connect/auth",
    "token_endpoint": "http://localhost:8080/realms/demo/protocol/openid-connect/token",
    "userinfo_endpoint": "http://localhost:8080/realms/demo/protocol/openid-connect/userinfo",
    "end_session_endpoint": "http://localhost:8080/realms/demo/protocol/openid-connect/logout",
    "jwks_uri": "http://localhost:8080/realms/demo/protocol/openid-connect/certs",
    "grant_types_supported": ["authorization_code", "refresh_token", "password"],
    "response_types_supported": ["code"],
    "subject_types_supported": ["public"],
    "id_token_signing_alg_values_supported": ["RS256"],
    "response_modes_supported": ["query"]
}

Found information at https://issues.jboss.org/browse/KEYCLOAK-571

Note: You might need to add your client to the Valid Redirect URI list
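To see this in action from the command line, a sketch with curl (note that older Keycloak versions serve these paths under an /auth prefix, i.e. /auth/realms/..., and the password grant assumes the client has Direct Access Grants enabled):

# fetch the discovery document for realm "test"
curl http://localhost:8080/realms/test/.well-known/openid-configuration

# then use its token_endpoint, e.g. with the Resource Owner Password grant
curl -X POST http://localhost:8080/realms/test/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=<your-client-id>" \
  -d "grant_type=password" \
  -d "username=<user>" \
  -d "password=<password>"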
Keycloak
28,658,735
210
I want to create a fairly simple role-based access control system using Keycloak's authorization system. The system Keycloak is replacing allows us to create a "user", who is a member of one or more "groups". In this legacy system, a user is given "permission" to access each of about 250 "capabilities" either through group membership (where groups are assigned permissions) or a direct grant of a permission to the user. I would like to map the legacy system to Keycloak authorizations. It should be simple for me to map each "capability" in the existing system to a Keycloak resource and a set of Keycloak scopes. For example, a "viewAccount" capability would obviously map to an "account" resource and a "view" scope; and "viewTransaction" maps to a "transaction" resource... but is it best practice to create just one "view" scope, and use it across multiple resources (account, transaction, etc.)? Or should I create a "viewAccount" scope, a "viewTransaction" scope, etc.? Similarly, I'm a little confused about permissions. For each practical combination of resource and scope, is it usual practice to create a permission? If there are multiple permissions matching a given resource/scope, what does Keycloak do? I'm guessing that the intention of Keycloak is to allow me to configure a matrix of permissions against resources and scopes, so for example I could have permission to access "accounts" and permission for "view" scope, so therefore I would have permission to view accounts? I ask because the result of all this seems to be that my old "viewAccount" capability ends up creating an "Account" resource, with "View" scope, and a "viewAccount" permission, which seems to get me back where I was. Which is fine, if it's correct. Finally, obviously I need a set of policies that determine if viewAccount should be applied. But am I right that this means I need a policy for each of the legacy groups that a user could belong to? For example, if I have a "help desk" role, then I need a "help desk membership" policy, which I could then add to the "viewAccount" permission. Is this correct?
Full transparency—I am by no means a Keycloak, OAuth, or OIDC expert and what I know is mostly from reading the docs, books, good ol' YouTube and playing around with the tool. This post will be comprised of two parts:

1. I'll attempt to answer all your questions to the best of my ability
2. I'll show you all how you can play around with policies/scopes/permissions in Keycloak without needing to deploy a separate app in order to better understand some of the core concepts in this thread.

Do note though that this is mostly meant to get you all started. I'm using Keycloak 8.0.0.

Part I

Some terminology before we get started: In Keycloak, you can create two types of permissions: Resource-Based and Scope-Based.

- Simply put, for Resource-Based permissions, you apply it directly to your resource
- For Scoped-Based permission, you apply it to your scope(s) or scope(s) and resource.

is it best practice to create just one "view" scope, and use it across multiple resources (account, transaction, etc.)? Or should I create a "viewAccount" scope, a "viewTransaction" scope, etc.?

Scopes represent a set of rights at a protected resource. In your case, you have two resources: account and transaction, so I would lean towards the second approach. In the long run, having a global view scope associated with all your resources (e.g., account, transaction, customer, settlement...) makes authorization difficult to both manage and adapt to security requirement changes. Here are a few examples that you can check out to get a feel for design:

- Slack API
- Box API
- Stripe

Do note though - I am not claiming that you shouldn't share scopes across resources. Matter of fact, Keycloak allows this for resources with the same type. You could for instance need both viewAccount and viewTransaction scope to read a transaction under a given account (after all you might need access to the account to view transactions). Your requirements and standards will heavily influence your design.

For each practical combination of resource and scope, is it usual practice to create a permission?

Apologies, I don't fully understand the question so I'll be a bit broad. In order to grant/deny access to a resource, you need to:

1. Define your policies
2. Define your permissions
3. Apply your policies to your permissions
4. Associate your permissions to a scope or resource (or both) for policy enforcement to take effect. See Authorization Process.

How you go about setting all this up is entirely up to you. You could for instance:

- Define individual policies, and tie each policy under the appropriate permission. Better yet, define individual policies, then group all your related policies under an aggregated policy (a policy of policies) and then associate that aggregated policy with the scope-based permission. You could have that scoped-based permission apply to both the resource and all its associated scopes.
- Or, you could further break apart your permissions by leveraging the two separate types. You could create permissions solely for your resources via the resource-based permission type, and separately associate other permissions solely with a scope via the scope-based permission type.

You have options.

If there are multiple permissions matching a given resource/scope, what does Keycloak do?

This depends on:

- The resource server's Decision Strategy
- Each permission's Decision Strategy
- Each policy's Logic value.

The Logic value is similar to Java's ! operator. It can either be Positive or Negative. When the Logic is Positive, the policy's final evaluation remains unchanged. When it's Negative, the final result is negated (e.g. if a policy evaluates to false and its Logic is Negative, then it will be true). To keep things simple, let's assume that the Logic is always set to Positive.

The Decision Strategy is what we really want to tackle. The Decision Strategy can either be Unanimous or Affirmative. From the docs,

Decision Strategy

This configurations changes how the policy evaluation engine decides whether or not a resource or scope should be granted based on the outcome from all evaluated permissions. Affirmative means that at least one permission must evaluate to a positive decision in order grant access to a resource and its scopes. Unanimous means that all permissions must evaluate to a positive decision in order for the final decision to be also positive. As an example, if two permissions for a same resource or scope are in conflict (one of them is granting access and the other is denying access), the permission to the resource or scope will be granted if the chosen strategy is Affirmative. Otherwise, a single deny from any permission will also deny access to the resource or scope.

Let's use an example to better understand the above. Suppose you have a resource with two permissions and someone is trying to access that resource (remember, the Logic is Positive for all policies). Now:

- Permission One has a Decision Strategy set to Affirmative. It also has 3 policies where they each evaluate to:
  - true
  - false
  - false
  Since one of the policies is set to true, Permission One is set to true (Affirmative - only 1 needs to be true).
- Permission Two has a Decision Strategy set to Unanimous with two policies:
  - true
  - false
  In this case Permission Two is false since one policy is false (Unanimous - they all need to be true).

Now comes the final evaluation. If the resource server's Decision Strategy is set to Affirmative, access to that resource would be granted because Permission One is true. If on the other hand, the resource server's Decision Strategy is set to Unanimous, access would be denied.

See:

- Resource Server Settings
- Managing Permissions

We'll keep revisiting this. I explain how to set the resource server's Decision Strategy in Part II.

so for example I could have permission to access "accounts" and permission for "view" scope, so therefore I would have permission to view accounts?

The short answer is yes. Now, let's expand on this a bit :)

If you have the following scenario:

- Resource server's Decision Strategy set to Unanimous or Affirmative
- Permission to access the account/{id} resource is true
- Permission to access the view scope is true

You will be granted access to view the account. true + true is equal to true under the Affirmative or Unanimous Decision Strategy.

Now if you have this:

- Resource server's Decision Strategy set to Affirmative
- Permission to access the account/{id} resource is true
- Permission to access the view scope is false

You will also be granted access to view the account. true + false is true under the Affirmative strategy.

The point here is that access to a given resource also depends on your setup, so be careful, as you may not want the second scenario.

But am I right that this means I need a policy for each of the legacy groups that a user could belong to?

I'm not sure how Keycloak behaved two years ago, but you can specify a Group-Based policy and simply add all your groups under that policy. You certainly do not need to create one policy per group.

For example, if I have a "help desk" role, then I need a "help desk membership" policy, which I could then add to the "viewAccount" permission. Is this correct?

Pretty much. There are many ways you can set this up. For instance, you can:

1. Create your resource (e.g. /account/{id}) and associate it with the account:view scope.
2. Create a Role-Based Policy and add the helpdesk role under that policy
3. Create a Scope-Based permission called viewAccount and tie it with scope, resource and policy

We'll set up something similar in Part II.

Part II

Keycloak has a neat little tool which allows you to test all your policies. Better yet, you actually do not need to spin up another application server and deploy a separate app for this to work.

Here's the scenario that we'll set up:

- We'll create a new realm called stackoverflow-demo
- We'll create a bank-api client under that realm
- We will define a resource called /account/{id} for that client
- The account/{id} will have the account:view scope
- We'll create a user called bob under the new realm
- We'll also create three roles: bank_teller, account_owner and user
- We will not associate bob with any roles. This is not needed right now.
- We'll set up the following two Role-Based policies:
  - bank_teller and account_owner have access to the /account/{id} resource
  - account_owner has access to the account:view scope
  - user does not have access to the resource or scope
- We'll play around with the Evaluate tool to see how access can be granted or denied.

Do forgive me, this example is unrealistic but I'm not familiar with the banking sector :)

Keycloak setup

Download and run Keycloak

cd tmp
wget https://downloads.jboss.org/keycloak/8.0.0/keycloak-8.0.0.zip
unzip keycloak-8.0.0.zip
cd keycloak-8.0.0/bin
./standalone.sh

Create initial admin user

1. Go to http://localhost:8080/auth
2. Click on the Administration Console link
3. Create the admin user and login

Visit Getting Started for more information. For our purposes, the above is enough.

Setting up the stage

Create a new realm

1. Hover your mouse around the master realm and click on the Add Realm button.
2. Enter stackoverflow-demo as the name.
3. Click on Create.
4. The top left should now say stackoverflow-demo instead of the master realm.

See Creating a New Realm

Create a new user

1. Click on the Users link on the left
2. Click on the Add User button
3. Enter the username (e.g., bob)
4. Ensure that User Enabled is turned on
5. Click Save

See Creating a New User

Create new roles

1. Click on the Roles link
2. Click on Add Role
3. Add the following roles: bank_teller, account_owner and user

Again, do not associate your user with the roles. For our purposes, this is not needed.

See Roles

Create a client

1. Click on the Clients link
2. Click on Create
3. Enter bank-api for the Client ID
4. For the Root URL enter http://127.0.0.1:8080/bank-api
5. Click on Save
6. Ensure that Client Protocol is openid-connect
7. Change the Access Type to confidential
8. Change Authorization Enabled to On
9. Scroll down and hit Save. A new Authorization tab should appear at the top.
10. Click on the Authorization tab and then Settings
11. Ensure that the Decision Strategy is set to Unanimous. This is the resource server's Decision Strategy.

See:

- Creating a Client Application
- Enabling Authorization Services

Create Custom Scopes

1. Click on the Authorization tab
2. Click on Authorization Scopes > Create to bring up the Add Scope page
3. Enter account:view in the name and hit enter.

Create "View Account Resource"

1. Click on the Authorization link above
2. Click on Resources
3. Click on Create
4. Enter View Account Resource for both the Name and Display name
5. Enter account/{id} for the URI
6. Enter account:view in the Scopes textbox
7. Click Save

See Creating Resources

Create your policies

1. Again under the Authorization tab, click on Policies
2. Select Role from the Create Policy dropdown
3. In the Name section, type Only Bank Teller and Account Owner Policy
4. Under Realm Roles select both the bank_teller and account_owner role
5. Ensure that Logic is set to Positive
6. Click Save
7. Click on the Policies link
8. Select Role again from the Create Policy dropdown. This time use Only Account Owner Policy for the Name
9. Under Realm Roles select account_owner
10. Ensure that Logic is set to Positive
11. Click Save

Click on the Policies link at the top; you should now see your newly created policies.

See Role-Based Policy

Do note that Keycloak has much more powerful policies. See Managing Policies

Create Resource-Based Permission

1. Again under the Authorization tab, click on Permissions
2. Select Resource-Based
3. Type View Account Resource Permission for the Name
4. Under Resources select View Account Resource
5. Under Apply Policy select Only Bank Teller and Account Owner Policy
6. Ensure that the Decision Strategy is set to Unanimous
7. Click Save

See Create Resource-Based Permissions

Phew...

Evaluating the Resource-Based permission

1. Again under the Authorization tab, select Evaluate
2. Under User enter bob
3. Under Roles select user. This is where we will associate our user with our created roles.
4. Under Resources select View Account Resource and click Add
5. Click on Evaluate.
6. Expand the View Account Resource with scopes [account:view] to see the results and you should see DENY.

This makes sense because we only allow two roles access to that resource via the Only Bank Teller and Account Owner Policy. Let's test this to make sure this is true!

1. Click on the Back link right above the evaluation result
2. Change bob's role to account_owner and click on Evaluate. You should now see the result as PERMIT. Same deal if you go back and change the role to bank_teller

See Evaluating and Testing Policies

Create Scope-Based Permission

1. Go back to the Permissions section
2. Select Scope-Based this time under the Create Permission dropdown.
3. Under Name, enter View Account Scope Permission
4. Under Scopes, enter account:view
5. Under Apply Policy, enter Only Account Owner Policy
6. Ensure that the Decision Strategy is set to Unanimous
7. Click Save

See Creating Scope-Based Permissions

Second test run

Evaluating our new changes

1. Go back to the Authorization section
2. Click on Evaluate
3. User should be bob
4. Roles should be bank_teller
5. Resources should be View Account Resource, and click Add
6. Click on Evaluate and we should get DENY.

Again this should come as no surprise as the bank_teller has access to the resource but not the scope. Here one permission evaluates to true, and the other to false. Given that the resource server's Decision Strategy is set to Unanimous, the final decision is DENY.

1. Click on Settings under the Authorization tab, and change the Decision Strategy to Affirmative and go back to steps 1-6 again. This time, the final result should be PERMIT (one permission is true, so the final decision is true).
2. For the sake of completeness, turn the resource server's Decision Strategy back to Unanimous. Again, go back to steps 1 through 6 but this time, set the role as account_owner. This time, the final result is again PERMIT, which makes sense, given that the account_owner has access to both the resource and scope.
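As a complement to the admin-console walkthrough above, the same permissions can be exercised over the wire via the token endpoint. The following is only a sketch: it assumes the Keycloak 8 /auth path prefix, that Direct Access Grants is enabled for bank-api, and placeholder credentials.

# 1. Obtain an access token for bob (the client secret is on the bank-api Credentials tab)
curl -X POST "http://localhost:8080/auth/realms/stackoverflow-demo/protocol/openid-connect/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password" \
  -d "client_id=bank-api" \
  -d "client_secret=<secret>" \
  -d "username=bob" \
  -d "password=<password>"

# 2. Exchange it for an RPT that carries the evaluated permissions (UMA grant)
curl -X POST "http://localhost:8080/auth/realms/stackoverflow-demo/protocol/openid-connect/token" \
  -H "Authorization: Bearer <access_token_from_step_1>" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  -d "audience=bank-api"

If the evaluation denies everything, step 2 returns an error instead of an RPT, mirroring the DENY results seen in the Evaluate tool.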
Keycloak
42,186,537
210
Does a Keycloak client ID have a client secret? I tried to create a client in the Keycloak admin console but I was not able to spot the client secret. Is it auto-generated? Where can I get the secret?
Your client needs to have the access type set to confidential; then you will have a new Credentials tab where you will see the client secret.

https://wjw465150.gitbooks.io/keycloak-documentation/content/server_admin/topics/clients/oidc/confidential.html
Keycloak
44,752,273
140
Keycloak refresh token lifetime is 1800 seconds:

"refresh_expires_in": 1800

How can I specify a different expiration time? In the Keycloak admin UI, only the access token lifespan can be specified.
As pointed out in the comments by @Kuba Šimonovský, the accepted answer is missing other important factors:

Actually, it is much much much more complicated.

TL;DR One can infer that the refresh token lifespan will be equal to the smallest value among (SSO Session Idle, Client Session Idle, SSO Session Max, and Client Session Max).

After having spent some time looking into this, and now looking back at this thread, I feel that the previous answers fell short of explaining in detail what is going on (one might even argue that they are wrong, actually).

Let us assume for now that we only have SSO Session Idle and SSO Session Max:

- SSO Session Max > SSO Session Idle: in this case the refresh token lifetime is the same as SSO Session Idle. Why? Because if the application is idle for SSO Session Idle time, the user gets logged out, and that is why the refresh token is bound to that value. Whenever the application requests a new token, both the refresh token lifetime and the SSO Session Idle countdown values will be reset again;
- SSO Session Max <= SSO Session Idle: then the refresh token lifetime will be the same as SSO Session Max. Why? Because regardless of what the user does (i.e., idle or not), the user gets logged out after SSO Session Max time, and that is why the refresh token is bound to that value.

From here we conclude that the refresh token lifespan is bound to the lowest of the two values SSO Session Idle and SSO Session Max.

Both those values are related to Single Sign-On (SSO). We still need to consider the values of the Client Session Idle and Client Session Max fields of the realm settings, which when NOT set are the same as SSO Session Idle and SSO Session Max, respectively.

If those values are set, in the context of the refresh token, they will override the values from SSO Session Idle and SSO Session Max, BUT only if they are lower than the values from SSO Session Idle and SSO Session Max.

Let us see the following examples: SSO Session Idle = 1800 seconds, SSO Session Max = 10 hours, and:

1. Client Session Idle = 600 seconds and Client Session Max = 1 hour. In this case, the refresh token lifespan is the same as Client Session Idle;
2. Client Session Idle = 600 seconds and Client Session Max = 60 seconds. In this case, the refresh token lifespan is the same as Client Session Max.
3. Client Session Idle = 1 day and Client Session Max = 10 days. In this case, the refresh token lifespan is the same as SSO Session Idle;

So in short, you can infer that the refresh token lifespan will be equal to the smallest value between (SSO Session Idle, Client Session Idle, SSO Session Max, and Client Session Max).

So the claim from previous answers that you can simply use the Client Session Max to control the refresh token lifespan is FALSE. One just needs to look at the previous examples 1) and 3).

Finally, the fields Client Session Idle and Client Session Max from the realm settings can be overwritten by the Client Session Idle and Client Session Max in the clients themselves, which will affect the refresh token lifespan for that client in particular. The same logic applies, but instead of considering the values Client Session Idle and Client Session Max from the realm settings, one needs to consider those from the client advanced settings.
Keycloak
52,040,265
99
I need to keep the user logged in to the system if the user's access_token has expired and the user wants to stay logged in. How can I get a newly updated access_token with the use of a refresh_token in Keycloak?

I am using vertx-auth for the auth implementation with Keycloak on Vert.x. Is it possible to refresh the access_token with vertx-auth or with Keycloak's REST API itself? Or what would be another implementation of this?
Keycloak has a REST API for creating an access_token using a refresh_token. It is a POST endpoint with application/x-www-form-urlencoded. Here is how it looks:

Method: POST
URL: https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/token
Body type: x-www-form-urlencoded
Form fields:
  client_id : <my-client-name>
  grant_type : refresh_token
  refresh_token: <my-refresh-token>

This will give you a new access token using the refresh token.

NOTE: if your refresh token is expired it will throw a 400 exception, in which case you can make the user log in again.

Check out a sample in Postman; you can develop a corresponding API using this.
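The same request as a curl one-liner, in case it helps (host, realm and token values are placeholders):

curl -X POST "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=<my-client-name>" \
  -d "grant_type=refresh_token" \
  -d "refresh_token=<my-refresh-token>"

A successful response is a JSON document containing a fresh access_token (and usually a new refresh_token as well).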
Keycloak
51,386,337
77
I have Keycloak standalone running on my local machine. I created a new realm called 'spring-test', then a new client called 'login-app'.

According to the REST documentation:

POST: http://localhost:8080/auth/realms/spring-test/protocol/openid-connect/token

{
  "client_id": "login-app",
  "username": "user123",
  "password": "pass123",
  "grant_type": "password"
}

should give me the JWT token, but I get a bad request with response:

{
  "error": "invalid_request",
  "error_description": "Missing form parameter: grant_type"
}

I am assuming that something is missing in my configuration.

EDIT: I was using a JSON body but it should be application/x-www-form-urlencoded; the following body works:

token_type_hint:access_token&token:{token}&client_id:{client_id}&client_secret:{client_secret}
You should send your data in a POST request with the Content-Type header value set to application/x-www-form-urlencoded, not JSON.
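For the realm and client in the question, the working request would look something like this as curl (a sketch; it assumes login-app is a public client with Direct Access Grants enabled):

curl -X POST "http://localhost:8080/auth/realms/spring-test/protocol/openid-connect/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=login-app" \
  -d "username=user123" \
  -d "password=pass123" \
  -d "grant_type=password"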
Keycloak
53,795,179
74
I have an issue while calling Keycloak's logout endpoint from a (mobile) application. This scenario is supported as stated in its documentation:

/realms/{realm-name}/protocol/openid-connect/logout

The logout endpoint logs out the authenticated user. The user agent can be redirected to the endpoint, in which case the active user session is logged out. Afterward the user agent is redirected back to the application. The endpoint can also be invoked directly by the application. To invoke this endpoint directly the refresh token needs to be included as well as the credentials required to authenticate the client.

My request has the following format:

POST http://localhost:8080/auth/realms/<my_realm>/protocol/openid-connect/logout
Authorization: Bearer <access_token>
Content-Type: application/x-www-form-urlencoded

refresh_token=<refresh_token>

but this error always occurs:

HTTP/1.1 400 Bad Request
Connection: keep-alive
X-Powered-By: Undertow/1
Server: WildFly/10
Content-Type: application/json
Content-Length: 123
Date: Wed, 11 Oct 2017 12:47:08 GMT

{
  "error": "unauthorized_client",
  "error_description": "UNKNOWN_CLIENT: Client was not identified by any client authenticator"
}

It seems that Keycloak is unable to detect the current client's identity even if I've provided the access_token. I've used the same access_token to access other Keycloak APIs without any problems, like userinfo (/auth/realms/<my_realm>/protocol/openid-connect/userinfo).

My request was based on this Keycloak issue. The author of the issue got it to work but it is not my case. I'm using Keycloak 3.2.1.Final.

Do you have that same problem? Have you got any idea how to solve it?
Finally, I've found the solution by looking at the Keycloak source code: https://github.com/keycloak/keycloak/blob/9cbc335b68718443704854b1e758f8335b06c242/services/src/main/java/org/keycloak/protocol/oidc/endpoints/LogoutEndpoint.java#L169. It says: If the client is a public client, then you must include a "client_id" form parameter.

So what I was missing is the client_id form parameter. My request should have been:

POST http://localhost:8080/auth/realms/<my_realm>/protocol/openid-connect/logout
Authorization: Bearer <access_token>
Content-Type: application/x-www-form-urlencoded

client_id=<my_client_id>&refresh_token=<refresh_token>

The session should be destroyed correctly.
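As a curl command, the working logout request would look something like this (placeholders as in the answer; for a confidential client, a client_secret form parameter would be sent as well):

curl -X POST "http://localhost:8080/auth/realms/<my_realm>/protocol/openid-connect/logout" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=<my_client_id>" \
  -d "refresh_token=<refresh_token>"

A 2xx response with no body indicates the session was destroyed.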
Keycloak
46,689,034
66
In my REST service I can obtain the principal information after authentication using the statement:

KeycloakPrincipal kcPrincipal = (KeycloakPrincipal) servletRequest.getUserPrincipal();

The Keycloak principal doesn't contain all the information I need about the authenticated user. Is it possible to customize my own principal type?

On the Keycloak server end I've developed a user federation provider. I saw that UserModel makes it possible to add a set of custom attributes to my user. Is it possible to insert my custom principal in that code? Is it possible to retrieve these attributes from the Keycloak principal? What is the way?
To add custom attributes you need to do three things:

1. Add attributes to the admin console
2. Add a claim mapping
3. Access the claims

The first one is explained pretty well here: https://www.keycloak.org/docs/latest/server_admin/index.html#user-attributes

Add claim mapping:

1. Open the admin console of your realm.
2. Go to Clients and open your client
3. This only works for Settings > Access Type confidential or public (not bearer-only)
4. Go to Mappers
5. Create a mapping from your attribute to json
6. Check "Add to ID token"

Access claims:

final Principal userPrincipal = httpRequest.getUserPrincipal();
if (userPrincipal instanceof KeycloakPrincipal) {
    KeycloakPrincipal<KeycloakSecurityContext> kp = (KeycloakPrincipal<KeycloakSecurityContext>) userPrincipal;
    IDToken token = kp.getKeycloakSecurityContext().getIdToken();
    Map<String, Object> otherClaims = token.getOtherClaims();
    if (otherClaims.containsKey("YOUR_CLAIM_KEY")) {
        yourClaim = String.valueOf(otherClaims.get("YOUR_CLAIM_KEY"));
    }
} else {
    throw new RuntimeException(...);
}

I used this for a custom attribute I added with a custom theme.
Keycloak
32,678,883
54
I have a problem getting Keycloak 3.2.1 to work behind Kong (0.10.3), a reverse proxy based on nginx.

The scenario is: I call Keycloak via my gateway route via https://{gateway}/auth and it shows me the entry point with the Keycloak logo, a link to the admin console, etc. - so far so good. But when clicking on Administration Console -> calling https://{gateway}/auth/admin/master/console/, Keycloak tries to load its css/js via http (see screenshot below), which my browser blocks because of mixed content.

I searched around and found this thread: keycloak apache server configuration with 'Mixed Content' problems, which led to this github repo: https://github.com/dukecon/keycloak_postgres_https

From there on, I tried to integrate its CLI into my dockerfile with success (I did not change the files' contents, just copied them into my repo and add/run them from the dockerfile). This is my dockerfile right now:

FROM jboss/keycloak-postgres:3.2.1.Final
USER root
ADD config.sh /tmp/
ADD batch.cli /tmp/
RUN bash /tmp/config.sh

#Give correct permissions when used in an OpenShift environment.
RUN chown -R jboss:0 $JBOSS_HOME/standalone && \
    chmod -R g+rw $JBOSS_HOME/standalone

USER jboss
EXPOSE 8080

Sadly, my problem still exists. So I am out of ideas for now and hope you could help me out: How do I tell Keycloak to load its css files via https here? Do I have to change something in the CLI script? Here's the content of the scripts:

config.sh:

#!/bin/bash -x

set -e

JBOSS_HOME=/opt/jboss/keycloak
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}

echo "==> Executing..."

cd /tmp
$JBOSS_CLI --file=`dirname "$0"`/batch.cli

# cf. http://stackoverflow.com/questions/34494022/permissions-error-when-using-cli-in-jboss-wildfly-and-docker
/bin/rm -rf ${JBOSS_HOME}/${JBOSS_MODE}/configuration/${JBOSS_MODE}_xml_history/current

and batch.cli:

embed-server --std-out=echo

# http://keycloak.github.io/docs/userguide/keycloak-server/html/server-installation.html
# 3.2.7.2. Enable SSL on a Reverse Proxy
# First add proxy-address-forwarding and redirect-socket to the http-listener element.
# Then add a new socket-binding element to the socket-binding-group element.

batch
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket,value=proxy-https)
/socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443)
run-batch

stop-embedded-server

It may be of interest too that Kong is deployed on OpenShift with a route using a redirect from http to https ( "insecureEdgeTerminationPolicy": "Redirect" ).
This sounds like a duplicate of Keycloak Docker behind loadbalancer with https fails
Set the request headers X-Forwarded-For and X-Forwarded-Proto in nginx. Then you have to configure Keycloak (Wildfly, Undertow) to work together with the SSL-terminating reverse proxy (aka load balancer). See http://www.keycloak.org/docs/latest/server_installation/index.html#_setting-up-a-load-balancer-or-proxy for a detailed description.
The point is that nginx terminates SSL and forwards the requests to Keycloak as plain http. Therefore Keycloak/Wildfly must be told that the incoming http requests from nginx must be handled as if they were https.
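As an illustration (not from the original answer; host name and port are placeholders for wherever Keycloak listens), a minimal nginx location block setting those headers could look like this:

location / {
    # forward the original client IP and scheme so Keycloak/Undertow
    # can build correct https redirect and resource URLs
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080;
}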
Keycloak
47,181,821
54
Keycloak is a great tool, but it lacks proper documentation. So we have Realm.roles, Client.roles and User.roles. How do these 3 work together when accessing an application using a specific client? Sincerely,
In KeyCloak we have those 3 roles:

Realm Role
Client Role
Composite Role

There are no User Roles in KeyCloak. You most likely confused that with User Role Mapping, which is basically mapping a role (realm, client, or composite) to a specific user.
In order to find out how these roles actually work, let's first take a look at a simple Realm model I created. As you can see in the picture below, every Realm has one or multiple Clients. And every Client can have multiple Users attached to it.
Now from this it should be easy to conclude how role mappings work.

Realm Role: It is a global role, belonging to that specific realm. You can access it from any client and map it to any user. Example roles: 'Global Admin', 'Admin'
Client Role: It is a role which belongs only to that specific client. You cannot access that role from a different client. You can only map it to the Users of that client. Example roles: 'Employee', 'Customer'
Composite Role: It is a role that has one or more roles (realm or client ones) associated to it.
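To make the composite idea concrete, here is a sketch using the admin REST API (not part of the original answer; the URL assumes a pre-17 server with the /auth prefix, a realm named demo, and an existing realm role app-admin; the payload needs the child role's id and name, which you can look up via GET /admin/realms/demo/roles):

# make the realm role 'app-admin' composite by adding 'offline_access' to it
curl -X POST "http://localhost:8080/auth/admin/realms/demo/roles/app-admin/composites" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"id": "<offline_access-role-id>", "name": "offline_access"}]'

A user mapped to app-admin then effectively holds offline_access as well.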
Keycloak
47,837,613
52
I have a client in Keycloak for my AWX (Ansible Tower) web page. I need only the users from one specific Keycloak group to be able to log in through this client. How can I forbid all other users (except those in one particular group) from using this Keycloak client?
I found a solution which does not require the scripts extension or any changes to the flow. The key to this solution is Client Scopes.
An application which wants to authorize a user needs a scope like email or uid, right? What if you only pass them to an application if a user is in a specific group? In the following, my client application name is App1.

Solution:

Go to your client roles (realm -> Clients -> click App1 -> Roles)
Click 'Add Role' -> enter Name (e.g. 'access') -> click 'Save'
Go to Client Scopes (realm -> Client Scopes)
Click on the scope which is needed by your client application (e.g. 'email')
Assign the client role 'access' in the 'Scope' tab by choosing client application 'App1' in the 'Client Roles' drop-down

Now you won't be able to log into your client application App1 anymore, as the role 'access' is not assigned to any user or group. You can try. Let's create a new group and assign the role and a user to it.

Create a group (realm -> Groups -> click 'New' -> enter Name 'App1 Users' -> click 'Save')
In the group, choose 'Role Mappings', choose 'App1' in the 'Client Roles' drop-down, and assign the role 'access'
Assign a user to 'App1 Users' (realm -> Users -> click on user -> Groups -> select 'App1 Users' -> click 'Join')

Voila, the chosen user can log into App1.
Keycloak
54,305,880
52
What is the correct way to set the aud claim to avoid the error below?

unable to verify the id token {"error": "oidc: JWT claims invalid: invalid claims, 'aud' claim and 'client_id' do not match, aud=account, client_id=webapp"}

I kinda worked around this error message by hardcoding the aud claim to be the same as my client_id. Is there any better way? Here is my docker-compose.yml:

version: '3'

services:
  keycloak-proxy:
    image: "keycloak/keycloak-gatekeeper"
    environment:
      - PROXY_LISTEN=0.0.0.0:3000
      - PROXY_DISCOVERY_URL=http://keycloak.example.com:8181/auth/realms/realmcom
      - PROXY_CLIENT_ID=webapp
      - PROXY_CLIENT_SECRET=0b57186c-e939-48ff-aa17-cfd3e361f65e
      - PROXY_UPSTREAM_URL=http://test-server:8000
    ports:
      - "8282:3000"
    command:
      - "--verbose"
      - "--enable-refresh-tokens=true"
      - "--enable-default-deny=true"
      - "--resources=uri=/*"
      - "--enable-session-cookies=true"
      - "--encryption-key=AgXa7xRcoClDEU0ZDSH4X0XhL5Qy2Z2j"

  test-server:
    image: "test-server"
With recent Keycloak version 4.6.0 the client id is apparently no longer automatically added to the audience field 'aud' of the access token. Therefore, even though the login succeeds, the client rejects the user. To fix this you need to configure the audience for your clients (compare doc [2]).

Configure audience in Keycloak

Add a realm or configure an existing one
Add a client my-app or use an existing one
Go to the newly added "Client Scopes" menu [1]
Add a client scope 'good-service'
Within the settings of 'good-service', go to the Mappers tab
Create a protocol mapper 'my-app-audience':
  Name: my-app-audience
  Mapper Type: Audience
  Included Client Audience: my-app
  Add to access token: on
Configure the client my-app in the "Clients" menu:
  Client Scopes tab in the my-app settings
  Add the available client scope "good-service" to the assigned default client scopes

If you have more than one client, repeat the steps for the other clients as well and add the good-service scope.
The intention behind this is to isolate client access. The issued access token will only be valid for the intended audience. This is thoroughly described in Keycloak's documentation [1,2].

Links to recent master version of keycloak documentation:
[1] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/client-scopes.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/master/server_admin/topics/clients/oidc/audience.adoc

Links with git tag:
[1] https://github.com/keycloak/keycloak-documentation/blob/f490e1fba7445542c2db0b4202647330ddcdae53/server_admin/topics/clients/oidc/audience.adoc
[2] https://github.com/keycloak/keycloak-documentation/blob/5e340356e76a8ef917ef3bfc2e548915f527d093/server_admin/topics/clients/client-scopes.adoc
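As a quick sanity check, which the original answer does not include, you can fetch a token and inspect its aud claim. This sketch assumes a confidential client my-app with the Direct Access Grants flow enabled, and standard shell tools (jq, base64); host, realm and credentials are placeholders:

# request a token for my-app
TOKEN=$(curl -s -d "client_id=my-app" -d "client_secret=<secret>" \
  -d "username=demo" -d "password=demo" -d "grant_type=password" \
  http://localhost:8080/auth/realms/demo/protocol/openid-connect/token | jq -r .access_token)

# decode the JWT payload (second dot-separated part) and look at aud
echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+' | base64 -d 2>/dev/null | jq .aud

Note: base64 -d may complain about missing padding on JWT payloads; for a quick look that is usually harmless.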
Keycloak
53,550,321
50
There is an endpoint to a backend server which gives a JSON response on pinging and is protected by an Apigee Edge proxy. Currently, this endpoint has no security and we want to implement bearer-only token authentication for all the clients making requests. All the clients making requests to the API will send that JWT token in the Authorization header as a Bearer token, and Apigee Edge will be used to verify it.
How do I use Keycloak to generate this JWT token?
Also, Apigee needs the public key of the origin of the JWT token (the server which signed it; in this case, I believe that is Keycloak). So my second doubt is: while I use Keycloak to generate the JWT token, how do I get the public key with which the server will verify that the token is valid?
This got figured out with the help of this medium article. All the steps I have mentioned below have a detailed description in the article (refer to steps 1 to 9 for the token part; the other steps are related to the Spring Boot application), but I would like to give an overview of those in reference to my question.

Generating a JWT token using KeyCloak

Install and run the KeyCloak server and go to the endpoint (e.g. http://localhost:8080/auth). Log in with the initial admin login and password (username=admin, password=admin).
Create a Realm and a Client with openid-connect as the Client Protocol.
Create users and roles, and map a client role to a user.
Assuming the server is on localhost, visiting http://localhost:8080/auth/realms/dev/.well-known/openid-configuration gives details about all security endpoints.
Sending a POST request with valid details to http://localhost:8080/auth/realms/dev/protocol/openid-connect/token returns the JWT token.

Getting the public key of the KeyCloak server

Go to Realm Settings and click on Public key; a pop-up shows the public key of the server for that realm. Refer to this image for better understanding.
Prepend -----BEGIN PUBLIC KEY----- and append -----END PUBLIC KEY----- to this copied public key to use it anywhere to verify the JWT token. Your public key should finally look something like this:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAhAj9OCZd0XjzOIad2VbUPSMoVK1X8hdD2Ad+jUXCzhZJf0RaN6B+79AW5jSgceAgyAtLXiBayLlaqSjZM6oyti9gc2M2BXzoDKLye+Tgpftd72Zreb4HpwKGpVrJ3H3Ip5DNLSD4a1ovAJ6Sahjb8z34T8c1OCnf5j70Y7i9t3y/j076XIUU4vWpAhI9LRAOkSLqDUE5L/ZdPmwTgK91Dy1fxUQ4d02Ly4MTwV2+4OaEHhIfDSvakLBeg4jLGOSxLY0y38DocYzMXe0exJXkLxqHKMznpgGrbps0TPfSK0c3q2PxQLczCD3n63HxbN8U9FPyGeMrz59PPpkwIDAQAB
-----END PUBLIC KEY-----

Validating the token on a third party platform

jwt.io is a great website for validating JWT tokens. All we have to do is paste the token and the public key. Read the introduction on the website to know more about validating tokens.
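For instance, the token request in the last step of the first section can be made with curl like this (realm dev is the one from the walkthrough; client id, secret and user credentials are placeholders for your own setup, and the client_secret line is only needed for confidential clients):

curl -X POST http://localhost:8080/auth/realms/dev/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password" \
  -d "client_id=<your-client-id>" \
  -d "client_secret=<your-client-secret>" \
  -d "username=<user>" \
  -d "password=<password>"

The JSON response contains access_token (the JWT) along with refresh_token and expiry information.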
Keycloak
54,884,938
49
I am calling /auth/realms/master/protocol/openid-connect/token to get an access token by sending the content below in the body:

grant_type=password&client_id=example-docker-jaxrs-app&username=user&password=password&client_secret=1d27aedd-11c2-4ed2-97d5-c586e1f9b3cd

But when I set Update Password as a required action for the user in the Keycloak admin console, I get the following error when trying to get a token via the above-mentioned API:

{
  "error": "invalid_grant",
  "error_description": "Account is not fully set up"
}

One more thing: what is the difference between the two settings, a temporary password and the Update Password required action?
If you mark the password as temporary a user action to update password is marked as required. And until the password has been updated/set by the user i.e. this action has been completed, you won't be able to get an access token using this user since the account is not "fully setup" and is in a kind of intermediate state where an action is required to complete the setup.
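As a side note (not part of the original answer, so treat the exact fields as an assumption): an administrator can also complete the setup programmatically through the admin REST API by updating the user's requiredActions. A cautious approach is to GET the user representation first, clear the requiredActions array, and PUT it back:

# look up the user (find <user-id> via GET .../users?username=user)
curl -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://localhost:8080/auth/admin/realms/master/users/<user-id>"

# send the representation back with the pending actions removed
curl -X PUT "http://localhost:8080/auth/admin/realms/master/users/<user-id>" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"requiredActions": []}'

Once no actions are pending (or the user has updated the password interactively), the password grant above returns tokens again.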
Keycloak
42,524,153
48
I'm using the Keycloak authorization server in order to manage my application permissions. However, I've found out the standalone server can be accessed locally only. http://localhost:8080/auth works, but http://myhostname:8080/auth does not. This prevents accessing the server from the internal network.
The standalone Keycloak server runs on top of a JBoss Wildfly instance, and this server doesn't allow accessing it externally by default, for security reasons (it should be only for the administration console, but it seems to affect every url in the case of Keycloak). It has to be booted with the -b=0.0.0.0 option to enable it.
However, if your Wildfly is running on a remote machine and you try to access your administrative page through the network by its IP address or hostname, let's say, at http://54.94.240.170:8080/, you will probably see a graceful This webpage is not available error; in other words, Wildfly said "No, thanks, I'm not allowing requests from other guys than the ones at my local machine".
See also:

Enable Wildfly remote access
Wildfly remotely access administration console doesnt work
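Concretely, for the standalone distribution that means starting the server like this (a minimal sketch; the path is relative to the unpacked distribution, and the flag can also be written -b 0.0.0.0):

# bind Wildfly's public interface to all network interfaces
bin/standalone.sh -b=0.0.0.0

When running in Docker instead, the same flag can usually be passed through to the container's entrypoint, depending on the image.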
Keycloak
34,410,707
46
I have integrated keycloak with an angular app. Basically, the frontend and backend are on different servers. The backend app is running on Apache Tomcat 8. The frontend app is running in the JBoss welcome content folder.

Angular config

angular.element(document).ready(function ($http) {
    var keycloakAuth = new Keycloak('keycloak.json');
    auth.loggedIn = false;
    keycloakAuth.init({ onLoad: 'login-required' }).success(function () {
        keycloakAuth.loadUserInfo().success(function (userInfo) {
            console.log(userInfo);
        });
        auth.loggedIn = true;
        auth.authz = keycloakAuth;
        auth.logoutUrl = keycloakAuth.authServerUrl + "/realms/app1/protocol/openid-connect/logout?redirect_uri=http://35.154.214.8/hrms-keycloak/index.html";
        module.factory('Auth', function() {
            return auth;
        });
        angular.bootstrap(document, ["themesApp"]);
    }).error(function () {
        window.location.reload();
    });
});

module.factory('authInterceptor', function($q, Auth) {
    return {
        request: function (config) {
            var deferred = $q.defer();
            if (Auth.authz.token) {
                Auth.authz.updateToken(5).success(function() {
                    config.headers = config.headers || {};
                    config.headers.Authorization = 'Bearer ' + Auth.authz.token;
                    deferred.resolve(config);
                }).error(function() {
                    deferred.reject('Failed to refresh token');
                });
            }
            return deferred.promise;
        }
    };
});

module.config(["$httpProvider", function ($httpProvider) {
    $httpProvider.interceptors.push('authInterceptor');
}]);

Request Header

Accept:*/*
Accept-Encoding:gzip, deflate
Accept-Language:en-US,en;q=0.8
Access-Control-Request-Headers:authorization
Access-Control-Request-Method:GET
Connection:keep-alive
Host:35.154.214.8:8080
Origin:http://35.154.214.8
Referer:http://35.154.214.8/accounts-keycloak/
User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36

Error on web console

XMLHttpRequest cannot load http://35.154.214.8:8080/company/loadCurrencyList. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://35.154.214.8' is therefore not allowed access.

Cors filter on backend

@Component
public class CORSFilter implements Filter {

    static Logger logger = LoggerFactory.getLogger(CORSFilter.class);

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse res, FilterChain chain) throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "*");
        response.setHeader("Access-Control-Max-Age", "3600");
        response.setHeader("Access-Control-Allow-Headers", "*");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}
I was fighting with KeyCloak and CORS and all of this for about two weeks, and this is my solution (for Keycloak 3.2.1): it's all about configuring the KeyCloak server.
It seems that the WebOrigin of your realm needs to be *. Only one origin: "*". That's all that was needed for me. If you enter your server as a WebOrigin, the trouble begins.
When you call keycloak.init in JavaScript, Keycloak does not generate CORS headers, so you have to configure them manually, and as soon as you do so and call keycloak.getUserInfo after a successful init, you get double CORS headers, which is not allowed.
Somewhere deep inside the Keycloak mailing lists it is stated that you need to set enable-cors=true in your keycloak.json, but there is nothing about that on keycloak.gitbooks.io, so it seems not to be true. They also don't mention CORS when describing the JavaScript and Node.js adapters, and I don't know why; it seems not to be important at all.
It also seems that you should not touch the WildFly configuration to provide CORS headers. Besides, CORS in OIDC is a special KeyCloak feature (and not a bug).
Hopefully this answer serves you well.
Keycloak
45,051,923
46
How can I set the docker keycloak base url as a parameter? I have the following nginx reverse proxy configuration:

server {
    listen 80;
    server_name example.com;

    location /keycloak {
        proxy_pass http://example.com:8087/;
    }
}

When I try to access http://example.com/keycloak/ I get a keycloak http redirect to http://example.com/auth/ instead of http://example.com/keycloak/auth/. Any ideas?
Just tested that @home, and actually multiple configuration additions are needed:

1/ Run the keycloak container with env -e PROXY_ADDRESS_FORWARDING=true as explained in the docs; this is required when accessing keycloak through a proxy:

docker run -it --rm -p 8087:8080 --name keycloak -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak:latest

Also explained in this SO question

2/ Change the web-context inside keycloak's configuration file $JBOSS_HOME/standalone/configuration/standalone.xml
The default keycloak configuration points to auth:

<web-context>auth</web-context>

Then you could change it to keycloak/auth:

<web-context>keycloak/auth</web-context>

If you need to automate this for docker, just create a new keycloak image:

FROM jboss/keycloak:latest
USER jboss
RUN sed -i -e 's/<web-context>auth<\/web-context>/<web-context>keycloak\/auth<\/web-context>/' $JBOSS_HOME/standalone/configuration/standalone.xml

3/ Add some proxy information to the nginx configuration (mostly for http / https handling):

location /keycloak {
    proxy_pass http://example.com:8087;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

If you are proxying requests from nginx to keycloak on the same server, I recommend using proxy_pass http://localhost:8087;, and if not, try to use a private network to avoid proxying through external web requests.
Hope this helps
Keycloak
44,624,844
44
I want to secure my Spring Boot 2.1 app with Keycloak 4.5. Currently I cannot start the application due to the following error:

Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.support.BeanDefinitionOverrideException: Invalid bean definition with name 'httpSessionManager' defined in class path resource [dummy/service/SecurityConfig.class]: Cannot register bean definition [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=securityConfig; factoryMethodName=httpSessionManager; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [dummy/SecurityConfig.class]] for bean 'httpSessionManager': There is already [Generic bean: class [org.keycloak.adapters.springsecurity.management.HttpSessionManager]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in URL [jar:file:/.m2/repository/org/keycloak/keycloak-spring-security-adapter/4.5.0.Final/keycloak-spring-security-adapter-4.5.0.Final.jar!/org/keycloak/adapters/springsecurity/management/HttpSessionManager.class]] bound.

My class SecurityConfig (see below) extends from KeycloakWebSecurityConfigurerAdapter. This adapter already defines the bean httpSessionManager. I understand why this is a problem. Question is, how can I prevent this or fix my conflict?

The steps I have done so far:

Built my pom (see below) using:
  spring-boot-starter-web
  spring-boot-starter-security
  keycloak-spring-boot-starter
  keycloak-adapter-bom in dependencyManagement
Defined an own SecurityConfig extending KeycloakWebSecurityConfigurerAdapter

pom.xml

...
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.0.RELEASE</version>
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>11</java.version>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <keycloak.version>4.5.0.Final</keycloak.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-spring-boot-starter</artifactId>
    </dependency>
</dependencies>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.keycloak.bom</groupId>
            <artifactId>keycloak-adapter-bom</artifactId>
            <version>${keycloak.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
...
SecurityConfig.java

@KeycloakConfiguration
@EnableGlobalMethodSecurity(prePostEnabled = true)
@Import(KeycloakWebSecurityConfigurerAdapter.class)
class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) {
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(new SimpleAuthorityMapper());
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
    }

    @Bean
    public KeycloakConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http.csrf().ignoringAntMatchers("/**/*");
        http.authorizeRequests()
            .anyRequest().permitAll();
    }
}

Update

There is a known issue (KEYCLOAK-8725). The fix is planned for 5.x of Keycloak. However, there was a workaround in the comments. Just replace the annotation @KeycloakConfiguration with:

@Configuration
@ComponentScan(
    basePackageClasses = KeycloakSecurityComponents.class,
    excludeFilters = @ComponentScan.Filter(type = FilterType.REGEX, pattern = "org.keycloak.adapters.springsecurity.management.HttpSessionManager"))
@EnableWebSecurity
This helped me to resolve the issue: remove @KeycloakConfiguration and use this instead (from KEYCLOAK-8725):

Java:

@Configuration
@ComponentScan(
    basePackageClasses = KeycloakSecurityComponents.class,
    excludeFilters = @ComponentScan.Filter(type = FilterType.REGEX, pattern = "org.keycloak.adapters.springsecurity.management.HttpSessionManager"))
@EnableWebSecurity

Kotlin:

@Configuration
@ComponentScan(
    basePackageClasses = [KeycloakSecurityComponents::class],
    excludeFilters = [ComponentScan.Filter(type = FilterType.REGEX, pattern = ["org.keycloak.adapters.springsecurity.management.HttpSessionManager"])]
)
@EnableWebSecurity
Keycloak
53,318,134
42
I am going to secure my golang application using Keycloak, but Keycloak itself does not support the Go language. There are some Go adapters as open projects on GitHub that implement the OpenID Connect protocol as a provider service, but they do not provide examples or documentation on how to integrate the libraries with an application. How can I interact with Keycloak using golang?
As you have pointed out, there is no official keycloak adapter for golang. But it is pretty straightforward to implement it. Here is a little walkthrough.

Keycloak server

For this example, I will use the official keycloak docker image to start the server. The version used is 4.1.0.Final. I think this will work with older KeyCloak versions too though.

docker run -d -p 8080:8080 -e KEYCLOAK_USER=keycloak -e KEYCLOAK_PASSWORD=k --name keycloak jboss/keycloak:4.1.0.Final

After the server is up and running, you can open localhost:8080/auth in your browser, navigate to the administration console and login with username keycloak and k as the corresponding password.
I will not go through the complete process of creating a realm/clients/users. You can look this up under https://www.keycloak.org/docs/latest/server_admin/index.html#admin-console
Here is just an outline of what I did to reproduce this example:

create a realm named demo
turn off the requirement of ssl for this realm (realmsettings -> login -> require ssl)
create a client named demo-client (change the "Access Type" to confidential)
create a user named demo with password demo (users -> add user). Make sure to activate and impersonate this user.
configure the demo-client to be confidential and use http://localhost:8181/demo/callback as a valid redirect URI.

The resulting keycloak.json (obtained from the installation tab) looks like this:

{
  "realm": "demo",
  "auth-server-url": "http://localhost:8080/auth",
  "ssl-required": "none",
  "resource": "demo-client",
  "credentials": {
    "secret": "cbfd6e04-a51c-4982-a25b-7aaba4f30c81"
  },
  "confidential-port": 0
}

Beware that your secret will be different though.

The Go Server

Let's go over to the go server. I use the github.com/coreos/go-oidc package for the heavy lifting:

package main

import (
    "context"
    "encoding/json"
    "log"
    "net/http"
    "strings"

    oidc "github.com/coreos/go-oidc"
    "golang.org/x/oauth2"
)

func main() {
    configURL := "http://localhost:8080/auth/realms/demo"
    ctx := context.Background()
    provider, err := oidc.NewProvider(ctx, configURL)
    if err != nil {
        panic(err)
    }

    clientID := "demo-client"
    clientSecret := "cbfd6e04-a51c-4982-a25b-7aaba4f30c81"
    redirectURL := "http://localhost:8181/demo/callback"

    // Configure an OpenID Connect aware OAuth2 client.
    oauth2Config := oauth2.Config{
        ClientID:     clientID,
        ClientSecret: clientSecret,
        RedirectURL:  redirectURL,
        // Discovery returns the OAuth2 endpoints.
        Endpoint: provider.Endpoint(),
        // "openid" is a required scope for OpenID Connect flows.
        Scopes: []string{oidc.ScopeOpenID, "profile", "email"},
    }

    state := "somestate"

    oidcConfig := &oidc.Config{
        ClientID: clientID,
    }
    verifier := provider.Verifier(oidcConfig)

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        rawAccessToken := r.Header.Get("Authorization")
        if rawAccessToken == "" {
            http.Redirect(w, r, oauth2Config.AuthCodeURL(state), http.StatusFound)
            return
        }

        parts := strings.Split(rawAccessToken, " ")
        if len(parts) != 2 {
            w.WriteHeader(400)
            return
        }
        _, err := verifier.Verify(ctx, parts[1])

        if err != nil {
            http.Redirect(w, r, oauth2Config.AuthCodeURL(state), http.StatusFound)
            return
        }

        w.Write([]byte("hello world"))
    })

    http.HandleFunc("/demo/callback", func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Query().Get("state") != state {
            http.Error(w, "state did not match", http.StatusBadRequest)
            return
        }

        oauth2Token, err := oauth2Config.Exchange(ctx, r.URL.Query().Get("code"))
        if err != nil {
            http.Error(w, "Failed to exchange token: "+err.Error(), http.StatusInternalServerError)
            return
        }
        rawIDToken, ok := oauth2Token.Extra("id_token").(string)
        if !ok {
            http.Error(w, "No id_token field in oauth2 token.", http.StatusInternalServerError)
            return
        }
        idToken, err := verifier.Verify(ctx, rawIDToken)
        if err != nil {
            http.Error(w, "Failed to verify ID Token: "+err.Error(), http.StatusInternalServerError)
            return
        }

        resp := struct {
            OAuth2Token   *oauth2.Token
            IDTokenClaims *json.RawMessage // ID Token payload is just JSON.
        }{oauth2Token, new(json.RawMessage)}

        if err := idToken.Claims(&resp.IDTokenClaims); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        data, err := json.MarshalIndent(resp, "", "    ")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write(data)
    })

    log.Fatal(http.ListenAndServe("localhost:8181", nil))
}

This program starts a regular http server with two endpoints. The first one ("/") is your regular endpoint that handles application logic. In this case, it only returns "hello world" to your client. The second endpoint ("/demo/callback") is used as a callback for keycloak. This endpoint needs to be registered on your keycloak server. Keycloak will issue a redirect to this callback URL upon successful user authentication. The redirect contains some additional query parameters. These parameters contain a code that can be used to obtain access/id tokens.

Verify your setup

In order to test this setup you can open a webbrowser and navigate to http://localhost:8181. The request should reach your go server, which tries to authenticate you. Since you did not send a token, the go server will redirect you to keycloak to authenticate. You should see the login screen of keycloak. Login with the demo user you have created for this realm (demo/demo). If you have configured your keycloak correctly, it will authenticate you and redirect you to your go server callback.
The end result should be a json like this:

{
    "OAuth2Token": {
        "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJsc1hHR2VxSmx3UUZweWVYR0x6b2plZXBYSEhXUngtTHVJTVVLdDBmNmlnIn0.eyJqdGkiOiI5ZjAxNjM2OC1lYmEwLTRiZjMtYTU5Ni1kOGU1MzdmNTNlZGYiLCJleHAiOjE1MzIxNzM2NTIsIm5iZiI6MCwiaWF0IjoxNTMyMTczMzUyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvZGVtbyIsImF1ZCI6ImRlbW8tY2xpZW50Iiwic3ViIjoiMzgzMzhjOGItYWQ3Zi00NjlmLTgzOTgtMTc5ODk1ODFiYTEyIiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiZGVtby1jbGllbnQiLCJhdXRoX3RpbWUiOjE1MzIxNzMzNTIsInNlc3Npb25fc3RhdGUiOiJjZTg2NWFkZC02N2I4LTQ5MDUtOGYwMy05YzE2MDNjMWJhMGQiLCJhY3IiOiIxIiwiYWxsb3dlZC1vcmlnaW5zIjpbXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIGVtYWlsIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsInByZWZlcnJlZF91c2VybmFtZSI6ImRlbW8iLCJlbWFpbCI6ImRlbW9AZGVtby5jb20ifQ.KERz8rBddxM9Qho3kgigX-fClWqbKY-3JcWT3JOQDoLa-prkorfa40BWlyf9ULVgjzT2d8FLJpqQIQYvucKU7Q7vFBVIjTGucUZaE7b6JGMea5H34A1i-MNm7L2CzDJ2GnBONhNwLKoftTSl0prbzwkzcVrps-JAZ6L2gssSa5hBBGJYBKAUfm1OIb57Jq0vzro3vLghZ4Ay7iNunwfcHUrxiFJfUjaU6PQwzrA5pnItOPuavJFUgso7-3JLtn3X9GQuyyZKrkDo6-gzU0JZmkQQzAXXgt43NxooryImuacwSB5xbIKY6qFkedldoOPehld1-oLv0Yy_FIwEad3uLw",
        "token_type": "bearer",
        "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJsc1hHR2VxSmx3UUZweWVYR0x6b2plZXBYSEhXUngtTHVJTVVLdDBmNmlnIn0.eyJqdGkiOiI0MjdmMTlhYy1jMTkzLTQ2YmQtYWFhNi0wY2Q1OTI5NmEwMGQiLCJleHAiOjE1MzIxNzUxNTIsIm5iZiI6MCwiaWF0IjoxNTMyMTczMzUyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvZGVtbyIsImF1ZCI6ImRlbW8tY2xpZW50Iiwic3ViIjoiMzgzMzhjOGItYWQ3Zi00NjlmLTgzOTgtMTc5ODk1ODFiYTEyIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImRlbW8tY2xpZW50IiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiY2U4NjVhZGQtNjdiOC00OTA1LThmMDMtOWMxNjAzYzFiYTBkIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIGVtYWlsIn0.FvvDW6ZSH8mlRR2zgaN1zesX14SmkCs9RrIVU4Jn1-SHVdKEA6YKur0-RUAFTObQDMLVhFFJ05AjGVGWpBrgVDcAwW2pI9saM-OHlyTJ3VfFoylgfzakVOIpbIDnHO12UaJrkOI9NWPAJdbBOzBHfsDhKbxhjg4ZX8SwlKr42rV4WWuSRcNu4_YDVO19SiXSCKXVldZ1_2S-qPvViq7VZfaoRLHuYyDvma_ByMsmib9JUkevJ8dxsYxVQ5FWaAfFanh1a1f8HxNRI-Cl180oPn1_Tqq_SYwxzBCw7Q_ENkMirwRS1a4cX9yMVEDW2uvKz2D-OiNAUK8d_ONuPEkTGQ",
        "expiry": "2018-07-21T13:47:28.986686385+02:00"
    },
    "IDTokenClaims": {
        "jti": "f4d56526-37d9-4d32-b99d-81090e92d3a7",
        "exp": 1532173652,
        "nbf": 0,
        "iat": 1532173352,
        "iss": "http://localhost:8080/auth/realms/demo",
        "aud": "demo-client",
        "sub": "38338c8b-ad7f-469f-8398-17989581ba12",
        "typ": "ID",
        "azp": "demo-client",
        "auth_time": 1532173352,
        "session_state": "ce865add-67b8-4905-8f03-9c1603c1ba0d",
        "acr": "1",
        "email_verified": true,
        "preferred_username": "demo",
        "email": "demo@demo.com"
    }
}

You can copy your access token and use curl to verify if the server is able to accept your tokens:

# use your complete access token here
export TOKEN="eyJhbG..."
curl -H "Authorization: Bearer $TOKEN" localhost:8181
# output
hello world

You can try it again after the token has expired - or tamper with the token. If you do, you should get a redirect to your keycloak server again.
Keycloak
48,855,122
35
I am trying to get the REST API of keycloak to work. Thanks to this post I was able to get the token. But when trying the example for the list of users in the first answer, I get the error:

"error": "RESTEASY003210: Could not find resource for full path: http://PATHTOCEAKLOAK:81/auth/user/realms/master/users"

Here is my request with Postman:
As I am using a Bitnami container, the admin is called user; that's why I am using /auth/user/ instead of /auth/admin/.
For those who are still facing this error and using 17.0+ version of Keycloak, there's a change in endpoints as per the official documentation. I resolved this issue by just using {realm}/user and omitting /auth in between.
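To make the difference concrete, here is the same admin call on both generations of Keycloak (host, realm and token are placeholders):

# WildFly distribution (Keycloak 16 and earlier)
curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/auth/admin/realms/master/users

# Quarkus distribution (Keycloak 17+), /auth dropped by default
curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/admin/realms/master/users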
Keycloak
70,577,004
35
I updated to Spring Boot 3 in a project that uses the Keycloak Spring Adapter. Unfortunately, it doesn't start because the KeycloakWebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter which was first deprecated in Spring Security and then removed. Is there currently another way to implement security with Keycloak? Or to put it in other words: How can I use Spring Boot 3 in combination with the Keycloak adapter? I searched the Internet, but couldn't find any other version of the adapter.
You can't use Keycloak adapters with spring-boot 3 for the reason you found, plus a few others related to transitive dependencies. As most Keycloak adapters were deprecated in early 2022, it is very likely that no update will be published to fix that. Instead, use spring-security 6 libs for OAuth2. Don't panic, it's an easy task with spring-boot.
In the following, I'll consider you have a good understanding of OAuth2 concepts and know exactly why you need to configure an OAuth2 client with oauth2Login (using authorization code flow, request authorization based on a session) or an OAuth2 resource server (no session, request authorization based on a Bearer token). In case of doubt, please refer to the OAuth2 essentials section of my tutorials.
I'll only detail here the configuration of a servlet application as a resource server, and then as a client, for a single Keycloak realm, with and then without spring-addons-starter-oidc, a Spring Boot starter of mine. Browse directly to the section you are interested in (but be prepared to write much more code if you don't want to use "my" starter). Also refer to my tutorials for different use-cases like:

accepting tokens issued by multiple realms or instances (known in advance or dynamically created in a trusted domain)
reactive applications (webflux), like spring-cloud-gateway for instance
apps publicly serving both a REST API and a server-side rendered UI to consume it
advanced access-control rules
BFF pattern
...

1. OAuth2 Resource Server

App exposes a REST API secured with access tokens. It is consumed by an OAuth2 REST client. A few samples of such clients:

another Spring application configured as an OAuth2 client and using RestClient, WebClient, @FeignClient, RestTemplate or alike to query the resource server
a Backend For Frontend (BFF) like a spring-cloud-gateway instance configured with oauth2Login() and the TokenRelay filter
development tools like Postman capable of fetching OAuth2 tokens and issuing REST requests
Javascript based applications configured as a "public" OAuth2 client with a library like angular-auth-oidc-client (but warning, this is now discouraged in favor of the OAuth2 BFF pattern)

1.1. With spring-addons-starter-oidc

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
<dependency>
    <groupId>com.c4-soft.springaddons</groupId>
    <artifactId>spring-addons-starter-oidc</artifactId>
    <version>7.8.5</version>
</dependency>

origins: http://localhost:4200
issuer: http://localhost:8442/realms/master

com:
  c4-soft:
    springaddons:
      oidc:
        ops:
        - iss: ${issuer}
          username-claim: preferred_username
          authorities:
          - path: $.realm_access.roles
            prefix: ROLE_
          - path: $.resource_access.*.roles
        resourceserver:
          cors:
          - path: /my-resources/**
            allowed-origin-patterns: ${origins}
          permit-all:
          - "/actuator/health/readiness"
          - "/actuator/health/liveness"
          - "/v3/api-docs/**"

The prefix for realm roles in the conf above is there only for illustration purposes; you might remove it. The CORS configuration would need some refinements too.

@Configuration
@EnableMethodSecurity
public static class WebSecurityConfig {
}

Nothing more is needed to configure a resource server with a fine-tuned CORS policy and authorities mapping. Bootiful, isn't it?
As you can guess from the ops property being an array, this solution is actually compatible with "static" multi-tenancy: you can declare as many trusted issuers as you need and it can be heterogeneous (use different claims for username and authorities). Also, this solution is compatible with reactive applications: spring-addons-starter-oidc will detect it from what is on the classpath and adapt its security auto-configuration.

1.2. With just spring-boot-starter-oauth2-resource-server

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
<dependency>
    <!-- used when converting Keycloak roles to Spring authorities -->
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
</dependency>

spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: http://localhost:8442/realms/master

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public static class WebSecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http, Converter<Jwt, ? extends AbstractAuthenticationToken> jwtAuthenticationConverter) throws Exception {
        http.oauth2ResourceServer(oauth2 -> oauth2.jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter)));

        // Enable and configure CORS
        http.cors(cors -> cors.configurationSource(corsConfigurationSource("http://localhost:4200")));

        // State-less session (state in access-token only)
        http.sessionManagement(sm -> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS));

        // Disable CSRF because of state-less session-management
        http.csrf(csrf -> csrf.disable());

        // Return 401 (unauthorized) instead of 302 (redirect to login) when
        // authorization is missing or invalid
        http.exceptionHandling(eh -> eh.authenticationEntryPoint((request, response, authException) -> {
            response.addHeader(HttpHeaders.WWW_AUTHENTICATE, "Bearer realm=\"Restricted Content\"");
            response.sendError(HttpStatus.UNAUTHORIZED.value(), HttpStatus.UNAUTHORIZED.getReasonPhrase());
        }));

        // @formatter:off
        http.authorizeHttpRequests(accessManagement -> accessManagement
            .requestMatchers("/actuator/health/readiness", "/actuator/health/liveness", "/v3/api-docs/**").permitAll()
            .anyRequest().authenticated()
        );
        // @formatter:on

        return http.build();
    }

    private UrlBasedCorsConfigurationSource corsConfigurationSource(String... origins) {
        final var configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Arrays.asList(origins));
        configuration.setAllowedMethods(List.of("*"));
        configuration.setAllowedHeaders(List.of("*"));
        configuration.setExposedHeaders(List.of("*"));

        final var source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/my-resources/**", configuration);
        return source;
    }

    @RequiredArgsConstructor
    static class JwtGrantedAuthoritiesConverter implements Converter<Jwt, Collection<? extends GrantedAuthority>> {

        @Override
        @SuppressWarnings({ "rawtypes", "unchecked" })
        public Collection<? extends GrantedAuthority> convert(Jwt jwt) {
            return Stream.of("$.realm_access.roles", "$.resource_access.*.roles").flatMap(claimPaths -> {
                Object claim;
                try {
                    claim = JsonPath.read(jwt.getClaims(), claimPaths);
                } catch (PathNotFoundException e) {
                    claim = null;
                }
                if (claim == null) {
                    return Stream.empty();
                }
                if (claim instanceof String claimStr) {
                    return Stream.of(claimStr.split(","));
                }
                if (claim instanceof String[] claimArr) {
                    return Stream.of(claimArr);
                }
                if (Collection.class.isAssignableFrom(claim.getClass())) {
                    final var iter = ((Collection) claim).iterator();
                    if (!iter.hasNext()) {
                        return Stream.empty();
                    }
                    final var firstItem = iter.next();
                    if (firstItem instanceof String) {
                        return (Stream<String>) ((Collection) claim).stream();
                    }
                    if (Collection.class.isAssignableFrom(firstItem.getClass())) {
                        return (Stream<String>) ((Collection) claim).stream().flatMap(colItem -> ((Collection) colItem).stream()).map(String.class::cast);
                    }
                }
                return Stream.empty();
            })
            /* Insert some transformation here if you want to add a prefix like "ROLE_" or force upper-case authorities */
            .map(SimpleGrantedAuthority::new)
            .map(GrantedAuthority.class::cast).toList();
        }
    }

    @Component
    @RequiredArgsConstructor
    static class SpringAddonsJwtAuthenticationConverter implements Converter<Jwt, JwtAuthenticationToken> {

        @Override
        public JwtAuthenticationToken convert(Jwt jwt) {
            final var authorities = new JwtGrantedAuthoritiesConverter().convert(jwt);
            final String username = JsonPath.read(jwt.getClaims(), "preferred_username");
            return new JwtAuthenticationToken(jwt, authorities, username);
        }
    }
}

In addition to being much more verbose than the preceding one, this solution is also less flexible:

not adapted to multi-tenancy (multiple Keycloak realms or instances)
hardcoded allowed origins
hardcoded claim names to fetch authorities from
hardcoded "permitAll" path matchers

2. OAuth2 Client

App exposes any kind of resources secured with sessions (not access tokens). It is consumed directly by a browser (or any other user agent capable of maintaining a session) without the need of a scripting language or OAuth2 client lib (authorization-code flow, logout and token storage are handled by Spring on the server). Common use-cases are:

applications with server-side rendered UI (with Thymeleaf, JSF, or whatever)
spring-cloud-gateway used as Backend For Frontend: configured with oauth2Login and the TokenRelay filter (hides OAuth2 tokens from the browser and replaces the session cookie with an access token before forwarding a request to downstream resource server(s))

2.1. With spring-addons-starter-oidc

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
<dependency>
    <groupId>com.c4-soft.springaddons</groupId>
    <artifactId>spring-addons-starter-oidc</artifactId>
    <version>7.8.5</version>
</dependency>

issuer: http://localhost:8442/realms/master
client-id: spring-addons-confidential
client-secret: change-me
client-uri: http://localhost:8080

spring:
  security:
    oauth2:
      client:
        provider:
          keycloak:
            issuer-uri: ${issuer}
        registration:
          keycloak-login:
            provider: keycloak
            authorization-grant-type: authorization_code
            client-id: ${client-id}
            client-secret: ${client-secret}
            scope: openid,profile,email,offline_access

com:
  c4-soft:
    springaddons:
      oidc:
        ops:
        - iss: ${issuer}
          username-claim: preferred_username
          authorities:
          - path: $.realm_access.roles
          - path: $.resource_access.*.roles
        client:
          client-uri: ${client-uri}
          security-matchers: /**
          permit-all:
          - /
          - /login/**
          - /oauth2/**
          csrf: cookie-accessible-from-js
          post-login-redirect-path: /home
          post-logout-redirect-path: /

@Configuration
@EnableMethodSecurity
public class WebSecurityConfig {
}

As for the resource server, this solution works in reactive applications too. There is also optional support for multi-tenancy on clients: allow a user to be logged in simultaneously on several OpenID Providers, on which he might have different usernames (subject by default, which is a UUID in Keycloak, and changes with each realm).

2.2. With just spring-boot-starter-oauth2-client

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
<dependency>
    <!-- used when converting Keycloak roles to Spring authorities -->
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
</dependency>

issuer: http://localhost:8442/realms/master
client-id: spring-addons-confidential
client-secret: change-me

spring:
  security:
    oauth2:
      client:
        provider:
          keycloak:
            issuer-uri: ${issuer}
        registration:
          keycloak-login:
            provider: keycloak
            authorization-grant-type: authorization_code
            client-id: ${client-id}
            client-secret: ${client-secret}
            scope: openid,profile,email,offline_access

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class WebSecurityConfig {

    @Bean
    SecurityFilterChain clientSecurityFilterChain(HttpSecurity http, InMemoryClientRegistrationRepository clientRegistrationRepository) throws Exception {
        http.oauth2Login(withDefaults());
        http.logout(logout -> {
            logout.logoutSuccessHandler(new OidcClientInitiatedLogoutSuccessHandler(clientRegistrationRepository));
        });
        // @formatter:off
        http.authorizeHttpRequests(ex -> ex
            .requestMatchers("/", "/login/**", "/oauth2/**").permitAll()
            .requestMatchers("/nice.html").hasAuthority("NICE")
            .anyRequest().authenticated());
        // @formatter:on
        return http.build();
    }

    @Component
    @RequiredArgsConstructor
    static class GrantedAuthoritiesMapperImpl implements GrantedAuthoritiesMapper {

        @Override
        public Collection<? extends GrantedAuthority> mapAuthorities(Collection<? extends GrantedAuthority> authorities) {
            Set<GrantedAuthority> mappedAuthorities = new HashSet<>();
            authorities.forEach(authority -> {
                if (OidcUserAuthority.class.isInstance(authority)) {
                    final var oidcUserAuthority = (OidcUserAuthority) authority;
                    final var issuer = oidcUserAuthority.getIdToken().getClaimAsURL(JwtClaimNames.ISS);
                    mappedAuthorities.addAll(extractAuthorities(oidcUserAuthority.getIdToken().getClaims()));
                } else if (OAuth2UserAuthority.class.isInstance(authority)) {
                    try {
                        final var oauth2UserAuthority = (OAuth2UserAuthority) authority;
                        final var userAttributes = oauth2UserAuthority.getAttributes();
                        final var issuer = new URL(userAttributes.get(JwtClaimNames.ISS).toString());
                        mappedAuthorities.addAll(extractAuthorities(userAttributes));
                    } catch (MalformedURLException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            return mappedAuthorities;
        }

        @SuppressWarnings({ "rawtypes", "unchecked" })
        private static Collection<GrantedAuthority> extractAuthorities(Map<String, Object> claims) {
            /* See resource server solution above for authorities mapping */
        }
    }
}

3. What is spring-addons-starter-oidc and why use it

This starter is a standard Spring Boot starter with additional application properties used to auto-configure default beans and provide them to Spring Security. It is important to note that the auto-configured @Beans are almost all @ConditionalOnMissingBean, which enables you to override them in your conf.
It is open-source and you can change everything it pre-configures for you (refer to the Javadoc, the starter READMEs, or the many samples). You should read the starter's source before deciding not to trust it; it is not that big. Start with the imports resource, it defines what is loaded by Spring Boot for auto-configuration.
In my opinion (and as demonstrated above), Spring Boot auto-configuration for OAuth2 can be pushed one step further to:

make OAuth2 configuration more portable: with a configurable authorities converter, switching from one OIDC provider to another is just a matter of editing properties (Keycloak, Auth0, Cognito, Azure AD, etc.)
ease app deployment on different environments: CORS configuration is controlled from properties files
reduce drastically the amount of Java code (things get even more complicated if you are in a multi-tenancy scenario)
support more than just one issuer by default
reduce chances of misconfiguration. For instance, it is frequent to see sample configurations with disabled CSRF protection on clients with oauth2Login (which is a major security breach as, in this case, request authorization is based on sessions, the CSRF attack vector), or wasting resources with sessions on endpoints secured with access tokens
Keycloak
74,571,191
33
I'm trying to copy an entire directory from my docker image to my local machine. The image is a keycloak image, and I'd like to copy the themes folder so I can work on a custom theme. I am running the following command:

docker cp 143v73628670f:keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding

However I am getting the following response:

Error response from daemon: Could not find the file keycloak/themes in container 143v73628670f

When I connect to my container using:

docker exec -t -i 143v73628670f /bin/bash

I can navigate to the themes by using:

cd keycloak/themes/

I can see it is located there and the files are as expected in the terminal. I'm running the instance locally on a Mac.
How do I copy that entire themes folder to my local machine? What am I doing wrong please?
EDIT

As a result of running 'pwd', you should run the docker cp command as follows:

docker cp 143v73628670f:/opt/jboss/keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding

You are forgetting the trailing ' / '. Therefore your command should look like this:

docker cp 143v73628670f:/keycloak/themes/ ~/Development/Code/Git/keycloak-recognition-login-branding

Also, you could make use of Docker volumes, which allow you to pass a local directory into the container when you run the container.
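A sketch of the volume approach (the container path comes from the layout above; the subfolder name is just an illustration, adjust to your image):

# mount a local folder into the container's themes directory,
# so theme edits on the host are visible inside Keycloak immediately
docker run -p 8080:8080 \
  -v ~/Development/Code/Git/keycloak-recognition-login-branding:/opt/jboss/keycloak/themes/custom \
  jboss/keycloak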
Keycloak
46,729,330
32
We are using the keycloak-adapter with Jetty for authentication and authorization using Keycloak. As per the Keycloak doc for the OIDC Auth flow:

Another important aspect of this flow is the concept of a public vs. a confidential client. Confidential clients are required to provide a client secret when they exchange the temporary codes for tokens. Public clients are not required to provide this client secret. Public clients are perfectly fine so long as HTTPS is strictly enforced and you are very strict about what redirect URIs are registered for the client. HTML5/JavaScript clients always have to be public clients because there is no way to transmit the client secret to them in a secure manner.

We have webapps which connect to Jetty and use auth. So, we have created a public client and it works awesome for webapp/REST authentication.
The problem is, as soon as we enable authorization, the client type gets converted from Public to Confidential and it does not allow resetting it to Public. Now we are in a soup. We cannot have public clients due to authorization, and we cannot connect webapps to a confidential client. This seems contradictory to us.
Any idea why the client needs to be confidential for authorization? Any help on how we can overcome this issue? Thanks.
As far as I understood, you have your frontend and backend applications separated.
If your frontend is a static web app not being served by the same backend application (server), and your backend is a simple REST API, then you would have two Keycloak clients configured:

a public client for the frontend app. It would be responsible for acquiring JWT tokens.
a bearer-only client, which would be attached to your backend application.

To enable authorization you would create roles (either realm or client scoped; start on the realm level as it's easier to comprehend). Every user would then be assigned a role (or roles) in the Keycloak admin UI. Based on this you should configure your Keycloak adapter configuration (on the backend).
All things considered, in order to talk to your REST API, you would attach a JWT token to each HTTP request in the Authorization header. Depending on your frontend framework, you can use either of these:

Keycloak js adapter
Other bindings (angular, react)

P.S. For debugging I have just written a CLI tool called brauzie that would help you fetch and analyse your JWT tokens (scopes, roles, etc.). It could be used for both public and confidential clients. You could as well use Postman and https://jwt.io
HTH :)
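For illustration (not from the original answer; the endpoint and token variable are placeholders), attaching the token to a request against the bearer-only backend looks like this:

# $ACCESS_TOKEN holds the JWT acquired by the public frontend client
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://your-backend/api/resource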
Keycloak
53,118,828
31
I have installed Keycloak server 4.3.4. How do I activate the REST API of Keycloak (add a user, enable a user, disable a user, ...)? Regards
The first step to do that is to create an admin account (which you would have been prompted to do as soon as you opened {keycloak-url}/auth). The next steps depend on whether you want to create the config through the Admin Console GUI or through the REST API.

Steps to do this through the Admin REST API:

First, you will have to get a token from {keycloak-url}/auth/realms/master/protocol/openid-connect/token like this: Note that the only changes you have to make in the below call are your Keycloak server address and the values of the admin username and password.
Once you obtain a token from the above call, you can use it in other Admin REST API calls by setting the Authorization header with Bearer token_value (replace token_value with the one obtained in step 1 above).
(Sharing an example below of a sample REST call which gets the list of users - https://www.keycloak.org/docs-api/10.0/rest-api/index.html#_users_resource)

{{SERVER}}/auth/admin/realms/myRealm/users

EDIT: As pointed out by @Shane: as of Keycloak version 19.0.1 the /auth part of the urls has been removed.
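Since the original answer shows the token call only as a screenshot, here is an equivalent curl (admin-cli is the built-in public client usually used for this; the url and credentials are placeholders):

curl -X POST "{keycloak-url}/auth/realms/master/protocol/openid-connect/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=admin-cli" \
  -d "username=<admin-username>" \
  -d "password=<admin-password>" \
  -d "grant_type=password"

The JSON response contains access_token, which goes into the Authorization: Bearer header of subsequent admin calls.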
Keycloak
53,283,281
31
I have a user base with identity and authentication managed by Keycloak. I would like to allow these users to log in and use AWS API Gateway services with Cognito using an OpenID Connect federation. The AWS documentation on using an OpenID Connect provider is somewhat lacking. I found an old reference using SAML but would prefer to avoid this and use OpenID Connect. If anybody has achieved this, would they mind writing up some simple instructions from the Keycloak admin perspective?
Answering my own question for future searchers, based on advice I have received from AWS Support:
The question itself was based on a misunderstanding. AWS Cognito does not authenticate users with Keycloak - the client application does that. Cognito Identity Federation is about granting access to AWS resources by creating AWS access credentials for an identity with a token from an external identity provider.
The OpenID client in Keycloak is the one and same client that is used by the end-user application. Redirection URLs send the user back to the application, which then passes the JWT token to AWS to exchange for AWS credentials. Cognito relies on the client app first directing the user to the authentication provider of their choice (in this case Keycloak), and then passing the access token from Keycloak to Cognito, which uses it to 1) create an identity if required, and 2) generate AWS credentials for access to the AWS role for "Authenticated" users in Cognito.

An example using the AWS CLI

Prerequisite: the client app obtains a JWT access token for the end user using any OpenID authentication method.

Create or retrieve an identity from Cognito:

aws cognito-identity get-id --cli-input-json file://test.json

Returns the identity:

{
    "IdentityId": "ap-southeast-2:<identity_uuid>"
}

(substitute ap-southeast-2 in the examples with your local region)

test.json contains the details of the AWS account, the Cognito pool and the user's JWT access token from Keycloak:

{
    "AccountId": "123456789012",
    "IdentityPoolId": "ap-southeast-2:<cognito-identity-pool-uuid>",
    "Logins": {
        "keycloak.url/auth/realms/realmname": "<access_token_jwt>"
    }
}

The app can then use this returned identity, along with the JWT access token, to obtain AWS credentials with which to consume AWS services:

aws cognito-identity get-credentials-for-identity --identity-id ap-southeast-2:<identity_uuid> --cli-input-json file://test2.json

Returns an AccessKeyId, a SecretKey and an AWS SessionToken along with an expiry time. These can be used to access AWS services depending on the permissions of the authenticated role that was established in the settings of the Cognito Federated Identity Pool:

{
    "Credentials": {
        "SecretKey": "<secret_key>",
        "SessionToken": "<aws_cli_session_token>",
        "Expiration": 1567891234.0,
        "AccessKeyId": "<access_key>"
    },
    "IdentityId": "ap-southeast-2:<identity_uuid>"
}

The contents of test2.json:

{
    "IdentityId": "ap-southeast-2:<identity_uuid>",
    "Logins": {
        "keycloak.url/auth/realms/realmname": "<keycloak_access_token_jwt>"
    }
}

I hope this provides context and assistance to people that stumble across this question in future.
Keycloak
49,810,326
30
I am currently working with Keycloak 6.0.1 for SSO authentication for multiple applications in an organisation. I am confused about the difference between clients and realms. If I have 5 different applications to be managed for SSO, do I have to create 5 different clients or 5 different realms? And if I have to create 5 different clients under 1 realm, could I execute a different authentication flow for each client in the same realm?
According to the Keycloak documentation:

Realm - A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control.

Clients - Clients are entities that can request Keycloak to authenticate a user. Most often, clients are applications and services that want to use Keycloak to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that are secured by Keycloak.

For your scenario you can create 5 different clients under one realm. Keycloak provides out of the box support for Single Sign On. For more information refer to the Keycloak documentation: keycloak documentation link
Keycloak
56,561,554
30
Is there any existing Keycloak client for ASP.NET Core? I have found a NuGet package for .NET, but it doesn't work with Core. Do you have any ideas on how to easily integrate with this security server (or maybe any alternatives)?
I've played a bit with this today. The most straightforward way is to use the OpenID Connect standard.

In Startup.cs I used OpenIdConnect authentication:

public void Configure(...)
{
    (...)
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationScheme = CookieAuthenticationDefaults.AuthenticationScheme,
        AutomaticAuthenticate = true,
        CookieHttpOnly = true,
        CookieSecure = CookieSecurePolicy.SameAsRequest
    });

    app.UseOpenIdConnectAuthentication(CreateKeycloakOpenIdConnectOptions());
    (...)
}

The OpenIdConnectOptions method:

private OpenIdConnectOptions CreateKeycloakOpenIdConnectOptions()
{
    var options = new OpenIdConnectOptions
    {
        AuthenticationScheme = "oidc",
        SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme,
        Authority = Configuration["Authentication:KeycloakAuthentication:ServerAddress"]
                    + "/auth/realms/"
                    + Configuration["Authentication:KeycloakAuthentication:Realm"],
        RequireHttpsMetadata = false, // only in development
        PostLogoutRedirectUri = Configuration["Authentication:KeycloakAuthentication:PostLogoutRedirectUri"],
        ClientId = Configuration["Authentication:KeycloakAuthentication:ClientId"],
        ClientSecret = Configuration["Authentication:KeycloakAuthentication:ClientSecret"],
        ResponseType = OpenIdConnectResponseType.Code,
        GetClaimsFromUserInfoEndpoint = true,
        SaveTokens = true
    };
    options.Scope.Add("openid");
    return options;
}

In appsettings.json, add the configuration for Keycloak:

{
    (...),
    "Authentication": {
        "KeycloakAuthentication": {
            "ServerAddress": "http://localhost:8180",
            "Realm": "demo",
            "PostLogoutRedirectUri": "http://localhost:57630/",
            "ClientId": "KeycloakASPNETCore",
            "ClientSecret": "secret-get-it-in-keycloakConsole-client-credentials"
        }
    }
}

The Keycloak client is configured as follows (screenshots omitted): in the client settings I added an 'accounting' role for testing, and a mapper 'member_of' of type 'User Client Role' so that roles are added to the claims.

If I want to authorize a user by role, I do something like this. Add authorization by claims in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    (...)
    services.AddAuthorization(options =>
    {
        // This claim value is an array. Any suggestions how to extract just a single role?
        // This still works.
        options.AddPolicy("Accounting", policy => policy.RequireClaim("member_of", "[accounting]"));
    });
}

I've edited the Get method in ValuesController (default Web API template):

[Authorize(Policy = "Accounting")]
[Route("api/[controller]")]
public class ValuesController : Controller
{
    // GET api/values
    [HttpGet]
    public Dictionary<string, string> Get()
    {
        var userPrinciple = User as ClaimsPrincipal;
        var claims = new Dictionary<string, string>();
        foreach (var claim in userPrinciple.Claims)
        {
            var key = claim.Type;
            var value = claim.Value;
            claims.Add(key, value);
        }
        return claims;
    }
}

If I log in with a user that has the accounting role, or is in a group that has the accounting role, it should display my user claims at localhost:57630/api/values. I hope this works for you.

Edit: .NET Core 2

Hi everyone! The way my app works changed quite a bit and I have not fully tested .NET Core 2 yet, but you can still try connecting to Keycloak like this in ConfigureServices:

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = Configuration["Authentication:KeycloakAuthentication:ServerAddress"]
                            + "/auth/realms/"
                            + Configuration["Authentication:KeycloakAuthentication:Realm"];
        options.TokenValidationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters
        {
            ValidAudiences = new string[] { "curl", "financeApplication", "accountingApplication", "swagger" }
        };
        options.RequireHttpsMetadata = false; // for test only!
        options.SaveToken = true;
        options.Validate();
    });

And in Configure:

app.UseAuthentication();

You can access your token later with IHttpContextAccessor httpContextAccessor, for example:

public KeycloakAuthorizationRequirementHandler(
    IConfiguration config,
    IHttpContextAccessor httpContextAccessor,
    IMemoryCache memoryCache)
{
    _config = config;
    _httpContextAccessor = httpContextAccessor;
    _memoryCache = memoryCache;
}

// Get the access token:
var accessToken = _httpContextAccessor.HttpContext.GetTokenAsync("access_token");
_httpContextAccessor.HttpContext.Items["username"] = username;

Tell me how it goes.
Keycloak
41,721,032
29
I know that there are admin APIs to get the list of users, which return a user representation array, e.g. GET /admin/realms/{realm}/groups/{id}/members (see https://www.keycloak.org/docs-api/2.5/rest-api/index.html#_userrepresentation). But is there a way to get users by a custom attribute?
This is enabled out of the box from Keycloak version 15.1.0. In the GET /{realm}/users API, a parameter q is introduced:

A query to search for custom attributes, in the format 'key1:value1 key2:value2'

curl 'http://{{keycloak_url}}/auth/admin/realms/{{realm}}/users?q=phone:123456789'

You can also combine several attributes within this parameter, using a space ' ' as the delimiter:

curl 'http://{{keycloak_url}}/auth/admin/realms/{{realm}}/users?q=phone:123456789 country:USA'

Docs: https://www.keycloak.org/docs-api/15.1/rest-api/index.html#_users_resource
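For use from application code rather than the command line, the same query can be issued against the admin REST API directly. Here is a minimal sketch in Python, assuming you already hold a suitably privileged access token; the host, realm, and attribute names ('phone', 'country') are placeholders, not Keycloak built-ins:

import requests

KEYCLOAK_URL = "https://keycloak.example.com"  # hypothetical host
REALM = "myrealm"                              # hypothetical realm
ADMIN_TOKEN = "<admin access token>"           # obtained beforehand

# Search users whose custom attributes match the q-parameter query.
resp = requests.get(
    f"{KEYCLOAK_URL}/auth/admin/realms/{REALM}/users",
    params={"q": "phone:123456789 country:USA"},
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
for user in resp.json():
    print(user["username"], user.get("attributes", {}))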
Keycloak
54,667,407
29
The deployment is on AWS and I do not want to tunnel to the box and open a browser to disable it. There seems to be a configuration, "ssl-required": "none", that can be placed in the keycloak-server.json file, but I'm not sure under which object. I've tried under "realm" and by itself, with no luck. I do not want to disable it at the adapter level; it needs to be global. So where does "ssl-required": "none" go, or how can SSL/HTTPS be disabled globally? (Also, I understand this is not recommended in production.)
In the "master" realm, over login tab. Change 'Require SSL' property to none. If you can not access locally to keycloak and it is configured with a database for instance Postgres, then execute the following SQL sentence. update REALM set ssl_required = 'NONE' where id = 'master'; It is necessary to restart keycloak
Keycloak
38,337,895
28
Is there a way to get a list of users on a Keycloak realm via REST WITHOUT using an admin account? Maybe some sort of assignable role from the admin console? Looking for any ideas. Right now I'm using admin credentials to grab an access token, then using that token to pull users from the realm/users endpoint.

Getting the token (from a Node.js app via request):

uri: `${keycloakUri}/realms/master/protocol/openid-connect/token`,
form: {
    grant_type: 'password',
    client_id: 'admin-cli',
    username: adminUsername,
    password: adminPassword,
}

Using the token:

uri: `${keycloakUri}/admin/realms/${keycloakRealm}/users`,
headers: {
    'authorization': `bearer ${passwordGrantToken}`,
}

I want to be able to use generic user info (usernames, emails, full names) from a client application.
You need to assign the view-users role from the realm-management client to the desired user. That is the configuration for the user (see the role-mapping screen in the admin console; screenshot omitted).

Then you can grab all the users from the ${keycloakUri}/admin/realms/${keycloakRealm}/users endpoint. That's the info retrieved from the endpoint, accessed via Postman (screenshot omitted).

Also, unrelated to the asked question, I strongly encourage you not to use grant_type=password unless you absolutely need to. From the Keycloak blog:

RESULT=`curl --data "grant_type=password&client_id=curl&username=user&password=password" http://localhost:8180/auth/realms/master/protocol/openid-connect/token`

This is a bit cryptic and luckily this is not how you should really be obtaining tokens. Tokens should be obtained by web applications by redirecting to the Keycloak login page. We're only doing this so we can test the service as we don't have an application that can invoke the service yet. Basically what we are doing here is invoking Keycloak's OpenID Connect token endpoint with grant type set to password, which is the Resource Owner Credentials flow that allows swapping a username and a password for a token.

See also the OAuth2 spec.
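As a variation on the above that avoids both the admin account and the password grant entirely, one common pattern (not shown in the answer itself) is to use a confidential client whose service account has been granted the view-users role. A rough sketch in Python under that assumption; the host, realm, client ID, and secret are all placeholders:

import requests

KEYCLOAK_URL = "https://keycloak.example.com"  # hypothetical host
REALM = "myrealm"                              # hypothetical realm

# Assumes a confidential client with service accounts enabled, whose
# service account holds the realm-management 'view-users' role.
token_resp = requests.post(
    f"{KEYCLOAK_URL}/auth/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "user-reader",        # hypothetical client
        "client_secret": "<client secret>",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# List the realm's users with the service-account token.
users_resp = requests.get(
    f"{KEYCLOAK_URL}/auth/admin/realms/{REALM}/users",
    headers={"Authorization": f"Bearer {access_token}"},
)
users_resp.raise_for_status()
for user in users_resp.json():
    print(user["username"], user.get("email"))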
Keycloak
46,470,477
28
Running Keycloak in standalone mode, I created a micro-service using the Node.js adapter for authenticating API calls. The JWT token from Keycloak is sent along with each API call, and the service should only respond if the token is valid. How can I validate the access token from the micro-service? Is there any token validation provided by Keycloak?
To expand on troger19's answer:

Question 1: How can I validate the access token from the micro-service?

Implement a function to inspect each request for a bearer token and send that token off for validation by your Keycloak server at the userinfo endpoint before it is passed to your API's route handlers. You can find your Keycloak server's specific endpoints (like the userinfo route) by requesting its well-known configuration. If you are using Express in your Node API, this might look like the following:

const express = require('express');
const request = require('request');

const app = express();

/*
 * additional express app config
 * app.use(bodyParser.json());
 * app.use(bodyParser.urlencoded({ extended: false }));
 */

const keycloakHost = 'your keycloak host';
const keycloakPort = 'your keycloak port';
const realmName = 'your keycloak realm';

// check each request for a valid bearer token
app.use((req, res, next) => {
  // assumes bearer token is passed as an authorization header
  if (req.headers.authorization) {
    // configure the request to your keycloak server
    const options = {
      method: 'GET',
      url: `https://${keycloakHost}:${keycloakPort}/auth/realms/${realmName}/protocol/openid-connect/userinfo`,
      headers: {
        // add the token you received to the userinfo request, sent to keycloak
        Authorization: req.headers.authorization,
      },
    };

    // send a request to the userinfo endpoint on keycloak
    request(options, (error, response, body) => {
      if (error) throw new Error(error);

      // if the request status isn't "OK", the token is invalid
      if (response.statusCode !== 200) {
        res.status(401).json({ error: 'unauthorized' });
      } else {
        // the token is valid; pass the request on to the next handler
        next();
      }
    });
  } else {
    // there is no token, don't process the request further
    res.status(401).json({ error: 'unauthorized' });
  }
});

// configure your other routes
app.use('/some-route', (req, res) => {
  /*
   * api route logic
   */
});

// catch 404 and forward to error handler
app.use((req, res, next) => {
  const err = new Error('Not Found');
  err.status = 404;
  next(err);
});

Question 2: Is there any token validation provided by Keycloak?

Making a request to Keycloak's userinfo endpoint is an easy way to verify that your token is valid.

Userinfo response for a valid token:

Status: 200 OK

{
  "sub": "xxx-xxx-xxx-xxx-xxx",
  "name": "John Smith",
  "preferred_username": "jsmith",
  "given_name": "John",
  "family_name": "Smith",
  "email": "john.smith@example.com"
}

Userinfo response for an invalid token:

Status: 401 Unauthorized

{
  "error": "invalid_token",
  "error_description": "Token invalid: Token is not active"
}

Additional information:

Keycloak provides its own npm package called keycloak-connect. The documentation describes simple authentication on routes, requiring users to be logged in to access a resource:

app.get('/complain', keycloak.protect(), complaintHandler);

I have not found this method to work using bearer-only authentication. In my experience, implementing this simple authentication method on a route results in an "access denied" response. This question also asks about how to authenticate a REST API using a Keycloak access token. The accepted answer recommends using the simple authentication method provided by keycloak-connect as well, but as Alex states in the comments:

"The keycloak.protect() function (doesn't) get the bearer token from the header. I'm still searching for this solution to do bearer only authentication" – alex, Nov 2 '17 at 14:02
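The same userinfo check, reduced to its core and independent of Express, can be expressed in a few lines. A minimal Python sketch against the endpoint described above; the host and realm are placeholders:

import requests

KEYCLOAK_URL = "https://keycloak.example.com"  # hypothetical host
REALM = "myrealm"                              # hypothetical realm

def token_is_active(access_token: str) -> bool:
    """Return True if Keycloak's userinfo endpoint accepts the token."""
    resp = requests.get(
        f"{KEYCLOAK_URL}/auth/realms/{REALM}/protocol/openid-connect/userinfo",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    # 200 means the token is valid; 401 means invalid or expired.
    return resp.status_code == 200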
Keycloak
48,274,251
28